Showing posts with label chatbots. Show all posts

Sunday, August 18, 2024

A.L.S. Stole His Voice. A.I. Retrieved It.; The New York Times, August 14, 2024

The New York Times; A.L.S. Stole His Voice. A.I. Retrieved It.

"As scientists continued training the device to recognize his sounds, it got only better. Over a period of eight months, the study said, Mr. Harrell came to utter nearly 6,000 unique words. The device kept up, sustaining a 97.5 percent accuracy.

That exceeded the accuracy of many smartphone applications that transcribe people’s intact speech. It also marked an improvement on previous studies in which implants reached accuracy rates of roughly 75 percent, leaving one of every four words liable to misinterpretation.

And whereas devices like Neuralink’s help people move cursors across a screen, Mr. Harrell’s implant allowed him to explore the infinitely larger and more complex terrain of speech.

“It went from a scientific demonstration to a system that Casey can use every day to speak with family and friends,” said Dr. David Brandman, the neurosurgeon who operated on Mr. Harrell and led the study alongside Dr. Stavisky.

That leap was enabled in part by the types of artificial intelligence that power language tools like ChatGPT. At any given moment, Mr. Harrell’s implant picks up activity in an ensemble of neurons, translating their firing pattern into vowel or consonant units of sound. Computers then agglomerate a string of such sounds into a word, and a string of words into a sentence, choosing the output they deem likeliest to correspond to what Mr. Harrell has tried to say...

Whether the same implant would prove as helpful to more severely paralyzed people is unclear. Mr. Harrell’s speech had deteriorated, but not disappeared.

And for all its utility, the technology cannot mitigate the crushing financial burden of trying to live and work with A.L.S.: Insurance will pay for Mr. Harrell’s caregiving needs only if he goes on hospice care, or stops working and becomes eligible for Medicaid, Ms. Saxon said, a situation that, she added, drives others with A.L.S. to give up trying to extend their lives.

Those very incentives also make it likelier that people with disabilities will become poor, putting access to cutting-edge implants even further out of their reach, said Melanie Fried-Oken, a professor of neurology at Oregon Health & Science University."

Saturday, June 29, 2024

The Voices of A.I. Are Telling Us a Lot; The New York Times, June 28, 2024

 Amanda Hess, The New York Times; The Voices of A.I. Are Telling Us a Lot

"Tech companies advertise their virtual assistants in terms of the services they provide. They can read you the weather report and summon you a taxi; OpenAI promises that its more advanced chatbots will be able to laugh at your jokes and sense shifts in your moods. But they also exist to make us feel more comfortable about the technology itself.

Johansson’s voice functions like a luxe security blanket thrown over the alienating aspects of A.I.-assisted interactions. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.,” Johansson said of Sam Altman, OpenAI’s founder. “He said he felt that my voice would be comforting to people.”

It is not that Johansson’s voice sounds inherently like a robot’s. It’s that developers and filmmakers have designed their robots’ voices to ease the discomfort inherent in robot-human interactions. OpenAI has said that it wanted to cast a chatbot voice that is “approachable” and “warm” and “inspires trust.” Artificial intelligence stands accused of devastating the creative industries, guzzling energy and even threatening human life. Understandably, OpenAI wants a voice that makes people feel at ease using its products. What does artificial intelligence sound like? It sounds like crisis management."

Monday, May 27, 2024

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

Christopher Kenneally, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."

Wednesday, March 20, 2024

Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini; TechCrunch, March 20, 2024

Natasha Lomas and Romain Dillet, TechCrunch; Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini

"In a never-ending saga between Google and France’s competition authority over copyright protections for news snippets, the Autorité de la Concurrence announced a €250 million fine against the tech giant Wednesday (around $270 million at today’s exchange rate).

According to the competition watchdog, Google disregarded some of its previous commitments with news publishers. But the decision is especially notable because it drops something else that’s bang up-to-date — by latching onto Google’s use of news publishers’ content to train its generative AI model Bard/Gemini.

The competition authority has found fault with Google for failing to notify news publishers of this GenAI use of their copyrighted content. This is in light of earlier commitments Google made which are aimed at ensuring it undertakes fair payment talks with publishers over reuse of their content."

Wednesday, December 20, 2023

Recent cases raise questions about the ethics of using AI in the legal system; NPR, December 15, 2023

NPR; Recent cases raise questions about the ethics of using AI in the legal system

"NPR's Steve Inskeep asks the director of the Private Law Clinic at Yale University, Andrew Miller, about the ethics of using artificial intelligence in the legal system...

INSKEEP: To what extent does someone have to think about what a large language model produces? I'm thinking about the way that we as consumers are continually given these terms of service that we're supposedly going to read and click I accept, and of course we glance at it and click I accept. You have to do something more than that as a lawyer, don't you?

MILLER: You're exactly right. A professor colleague said to me, you know, when a doctor uses an MRI machine, the doctor doesn't necessarily know every technical detail of the MRI machine, right? And my response was, well, that's true, but the doctor knows enough about how the MRI works to have a sense of the sorts of things that would be picked up on an MRI, the sorts of things that wouldn't be picked up. With ChatGPT we don't have - at least not yet - particularly well developed understanding of how our inputs relate to the outputs."

Thursday, October 19, 2023

Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial'; NPR, October 19, 2023

NPR; Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial'

"Amy Kurzweil said the chatbot project and the book that came out of it underscored her somewhat positive feelings about AI.

"I feel like you need to imagine the robot you want to see in the world," she said. "We're not going to stop progress. But we can think about applications of AI that facilitate human connection.""

Friday, August 11, 2023

Senator wants Google to answer for accuracy, ethics of generative AI tool; HealthcareITNews, August 9, 2023

 Mike Miliard, HealthcareITNews; Senator wants Google to answer for accuracy, ethics of generative AI tool

"Sen. Mark Warner, D-Virginia, wrote a letter to Sundar Pichai, CEO of Google parent company Alphabet, on Aug. 8, seeking clarity into the technology developer's Med-PaLM 2, an artificial intelligence chatbot, and how it's being deployed and trained in healthcare settings."

Tuesday, August 8, 2023

Minnesota colleges grappling with ethics and potential benefits of ChatGPT; Star Tribune, August 6, 2023

Star Tribune; Minnesota colleges grappling with ethics and potential benefits of ChatGPT

"While some Minnesota academics are concerned about students using ChatGPT to cheat, others are trying to figure out the best way to teach and use the tool in the classroom.

"The tricky thing about this is that you've got this single tool that can be used very much unethically in an educational setting," said Darin Ulness, a chemistry professor at Concordia College in Moorhead. "But at the same time, it can be such a valuable tool that we can't not use it.""

Tuesday, July 25, 2023

ChatGPT: Ethics and the 21st Century Lawyer; New York State Bar Association (NYSBA), July 24, 2023

New York State Bar Association (NYSBA); ChatGPT: Ethics and the 21st Century Lawyer

"The ChatGPT Lawyer incident raises many questions of skills and ethics.  This program will discuss the disciplinary decisions arising from that incident in the Southern District of New York and how this may inform future use of AI technology in the practice of law in federal and state court. Panelists will cover the use of Chat GPT and the use of AI in other legal research tools (Westlaw, Lexis). Attendees will gain an understanding of best practices for writing briefs and citing cases appropriately using AI."

Saturday, July 22, 2023

Tell us: are you using AI for emotional connection or support?; The Guardian, July 18, 2023

Guardian Community Team, The Guardian; Tell us: are you using AI for emotional connection or support?

We’d like to hear from people who are developing personal relationships with AI

"We’d like to find out more about people developing personal relationships with AI.

As some turn to chatbot impersonations of lost loved ones, we’d like to find out other ways that people are using AI for emotional connection or support. What has worked for you and what hasn’t?

Let us know about your experiences below."

Wednesday, July 19, 2023

‘It was as if my father were actually texting me’: grief in the age of AI; The Guardian, July 18, 2023

 Aimee Pearcy, The Guardian; ‘It was as if my father were actually texting me’: grief in the age of AI

"For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?"

Thursday, July 13, 2023

A.I. Could Solve Some of Humanity's Hardest Problems. It Already Has.; The New York Times, July 11, 2023

The Ezra Klein Show, The New York Times; A.I. Could Solve Some of Humanity's Hardest Problems. It Already Has.

"Since the release of ChatGPT, huge amounts of attention and funding have been directed toward chatbots. These A.I. systems are trained on copious amounts of human-generated data and designed to predict the next word in a given sentence. They are hilarious and eerie and at times dangerous.

But what if, instead of building A.I. systems that mimic humans, we built those systems to solve some of the most vexing problems facing humanity?"

Wednesday, July 12, 2023

Inside the White-Hot Center of A.I. Doomerism; The New York Times, July 11, 2023

 Kevin Roose, The New York Times; Inside the White-Hot Center of A.I. Doomerism

"But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.

Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or “A.G.I.,” the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us...

And lastly, he made a moral case for Anthropic’s decision to create powerful A.I. systems, in the form of a thought experiment.

“Imagine if everyone of good conscience said, ‘I don’t want to be involved in building A.I. systems at all,’” he said. “Then the only people who would be involved would be the people who ignored that dictum — who are just, like, ‘I’m just going to do whatever I want.’ That wouldn’t be good.”"

Thursday, July 6, 2023

ChatGPT - An Ethical Nightmare Or Just Another Technology?; Forbes, July 6, 2023

 Charles Towers-Clark, Forbes; ChatGPT - An Ethical Nightmare Or Just Another Technology?

"Whilst, as mentioned above, many AI leaders are concerned about the speed of AI developments - at the end of the day ChatGPT has a lot of information, but it doesn’t have the human skill of knowledge.

Yet."

Monday, June 19, 2023

Ethical, legal issues raised by ChatGPT training literature; Tech Explore, May 8, 2023

Peter Grad, Tech Xplore; Ethical, legal issues raised by ChatGPT training literature

""Knowing what books a model has been trained on is critical to assess such sources of bias," they said.

"Our work here has shown that OpenAI models know about books in proportion to their popularity on the web."

Works detected in the Berkeley study include "Harry Potter," "1984," "Lord of the Rings," "Hunger Games," "Hitchhiker's Guide to the Galaxy," "Fahrenheit 451," "A Game of Thrones" and "Dune."

While ChatGPT was found to be quite knowledgeable about popular works, lesser-known works such as Global Anglophone Literature—readings aimed beyond core English-speaking nations that include Africa, Asia and the Caribbean—were largely unknown. Also overlooked were works from the Black Book Interactive Project and Black Caucus Library Association award winners.

"We should be thinking about whose narrative experiences are encoded in these models, and how that influences other behaviors," Bamman, one of the Berkeley researchers, said in a recent Tweet. He added, "popular texts are probably not good barometers of model performance [given] the bias toward sci-fi/fantasy.""

Friday, June 9, 2023

The ChatGPT Lawyer Explains Himself; The New York Times, June 8, 2023

Benjamin Weiser, The New York Times; The ChatGPT Lawyer Explains Himself

"Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”

Rebecca Roiphe, a New York Law School professor who studies the legal profession, said the imbroglio has fueled a discussion about how chatbots can be incorporated responsibly into the practice of law.

“This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”"