Showing posts with label ChatGPT.

Tuesday, October 15, 2024

This threat hunter chases U.S. foes exploiting AI to sway the election; The Washington Post, October 13, 2024

The Washington Post; This threat hunter chases U.S. foes exploiting AI to sway the election

"Ben Nimmo, the principal threat investigator for the high-profile AI pioneer, had uncovered evidence that Russia, China and other countries were using its signature product, ChatGPT, to generate social media posts in an effort to sway political discourse online. Nimmo, who had only started at OpenAI in February, was taken aback when he saw that government officials had printed out his report, with key findings about the operations highlighted and tabbed.

That attention underscored Nimmo’s place at the vanguard in confronting the dramatic boost that artificial intelligence can provide to foreign adversaries’ disinformation operations. In 2016, Nimmo was one of the first researchers to identify how the Kremlin interfered in U.S. politics online. Now tech companies, government officials and other researchers are looking to him to ferret out foreign adversaries who are using OpenAI’s tools to stoke chaos in the tenuous weeks before Americans vote again on Nov. 5.

So far, the 52-year-old Englishman says Russia and other foreign actors are largely “experimenting” with AI, often in amateurish and bumbling campaigns that have limited reach with U.S. voters. But OpenAI and the U.S. government are bracing for Russia, Iran and other nations to become more effective with AI, and their best hope of parrying that is by exposing and blunting operations before they gain traction."

Friday, October 11, 2024

Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room; Business Insider, October 10, 2024

Business Insider; Why The New York Times' lawyers are inspecting OpenAI's code in a secretive room

"OpenAI is worth $157 billion largely because of the success of ChatGPT. But to build the chatbot, the company trained its models on vast quantities of text it didn't pay a penny for.

That text includes stories from The New York Times, articles from other publications, and an untold number of copyrighted books.

The examination of the code for ChatGPT, as well as for Microsoft's artificial intelligence models built using OpenAI's technology, is crucial for the copyright infringement lawsuits against the two companies.

Publishers and artists have filed about two dozen major copyright lawsuits against generative AI companies. They are out for blood, demanding a slice of the economic pie that made OpenAI the dominant player in the industry and which pushed Microsoft's valuation beyond $3 trillion. Judges deciding those cases may carve out the legal parameters for how large language models are trained in the US."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Saturday, August 31, 2024

ChatGPT Spirituality: Connection or Correction?; Geez, Spring 2024 Issue: February 27, 2024

Rob Saler, Geez; ChatGPT Spirituality: Connection or Correction?

"Earlier this year, I was at an academic conference sitting with friends at a table. This was around the time that OpenAI technology – specifically ChatGPT – was beginning to make waves in the classroom. Everyone was wondering how to adapt to the new technology. Even at that early point, differentiated viewpoints ranged from incorporation (“we can teach students to use it well as part of the curriculum of the future”) to outright resistance (“I am going back to oral exams and blue book written in-class tests”).

During the conversation, a very intelligent friend casually remarked that she recently began using ChatGPT for therapy – not emergency therapeutic intervention, but more like life coaching and as a sounding board for vocational discernment. Because we all respected her sincerity and intellect, several of us (including me) suppressed our immediate shock and listened as she laid out a very compelling case for ChatGPT as a therapy supplement – and perhaps, in the case of those who cannot or choose not to afford sessions with a human therapist, a therapy substitute. ChatGPT is free (assuming one has internet), available 24/7, shapeable to one’s own interests over time, (presumably) confidential, etc…

In my teaching on AI and technology throughout the last semester, I used this example with theology students (some of whom are also receiving licensure as therapists) as a way of pressing them to examine their own assumptions about AI – and then, by extension, their own assumptions about ontology. If the gut-level reaction to ChatGPT therapy is that it is not “real,” then – in Matrix-esque fashion – we are called to ask how we should define “real.” If a person has genuine insights or intense spiritual experiences engaging in vocational discernment with a technology that can instantaneously generate increasingly relevant responses to prompts, then what is the locus of reality that is missing?"

Monday, August 12, 2024

Artificial Intelligence in the pulpit: a church service written entirely by AI; United Church of Christ, July 16, 2024

United Church of Christ; Artificial Intelligence in the pulpit: a church service written entirely by AI

"Would you attend a church service if you knew that it was written entirely by an Artificial Intelligence (AI) program? What would your thoughts and feelings be about this use of AI?

That’s exactly what the Rev. Dwight Lee Wolter wanted to know — and he let his church members at the Congregational Church of Patchogue on Long Island, New York, know that was what he was intending to do on Sunday, July 14. He planned a service that included a call to worship, invocation, pastoral prayer, scripture reading, sermon, hymns, prelude, postlude and benediction with the use of ChatGPT. ChatGPT is a free AI program developed by OpenAI, an Artificial Intelligence research company and released in 2022.

Taking fear and anger out of exploration

“My purpose is to take the fear and anger out of AI exploration and replace it with curiosity, flexibility and open-mindedness,” said Wolter. “If, as widely claimed, churches need to adapt to survive, we might not recognize the church in 20 years if we could see it now; then AI will be a part of the church of the future. No matter what we presently think of it, it will be present in the future doing a lot of the thinking for us.”...

Wolter intends to follow up Sunday’s service with a reflection about how it went. On July 21, he will give a sermon about AI, with people offering input about the AI service. “We will discuss their reactions, feelings, thoughts, likes and dislikes, concerns and questions.” Wolter will follow with his synopsis sharing the benefits, criticisms, fears and concerns of AI...

Wolter believes we need to “disarm contempt prior to investigation,” when it comes to things like Artificial Intelligence. “AI is not going anywhere. It’s a tool–and with a shortage of clergy, money and volunteers, we will continue to rely on it.”"

Friday, August 9, 2024

TryTank Research Institute helps create Cathy, a new AI chatbot and Episcopal Church expert; Episcopal News Service, August 7, 2024

Kathryn Post, Episcopal News Service; TryTank Research Institute helps create Cathy, a new AI chatbot and Episcopal Church expert

"The latest AI chatbot geared for spiritual seekers is AskCathy, co-launched in June by a research institute and ministry organization and aiming to roll out soon on Episcopal church websites. Cathy draws on the latest version of ChatGPT and is equipped to prioritize Episcopal resources.

“This is not a substitute for a priest,” said the Rev. Tay Moss, director of one of Cathy’s architects, the Innovative Ministry Center, an organization based at the Toronto United Church Council that develops digital resources for communities of faith. “She comes alongside you in your search queries and helps you discover material. But she is not the end-all be-all of authority. She can’t tell you how to believe or what to believe.”

The Rev. Lorenzo Lebrija, the executive director of TryTank Research Institute at Virginia Theological Seminary and Cathy’s other principal developer, said all the institute’s projects attempt to follow the lead of the Holy Spirit, and Cathy is no different. He told Religion News Service the idea for Cathy materialized after brainstorming how to address young people’s spiritual needs. What if a chatbot could meet people asking life’s biggest questions with care, insight and careful research?

“The goal is not that they will end up at their nearby Episcopal church on Sunday. The goal is that it will spark in them this knowledge that God is always with us, that God never leaves us,” Lebrija said. “This can be a tool that gives us a glimpse and little direction that we can then follow on our own.”

To do that, though, would require a chatbot designed to avoid the kinds of hallucinations and errors that have plagued other ChatGPT integrations. In May, the Catholic evangelization site Catholic Answers “defrocked” their AI avatar, Father Justin, designating him as a layperson after he reportedly claimed to be an ordained priest capable of taking confession and performing marriages...

The Rev. Peter Levenstrong, an associate rector at an Episcopal church in San Francisco who blogs about AI and the church, told RNS he thinks Cathy could familiarize people with the Episcopal faith.

“We have a PR issue,” Levenstrong said. “Most people don’t realize there is a denomination that is deeply rooted in tradition, and yet open and affirming, and theologically inclusive, and doing its best to strive toward a future without racial injustice, without ecocide, all these huge problems that we as a church take very seriously.”

In his own context, Levenstrong has already used Cathy to brainstorm Harry Potter-themed lessons for children. (She recommended a related book written by an Episcopalian.)

Cathy’s creators know AI is a thorny topic. Their FAQ page anticipates potential critiques."
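The article describes Cathy as drawing on the latest version of ChatGPT while being "equipped to prioritize Episcopal resources." One common way to get that behavior is to retrieve passages from a curated corpus and place them in front of the model before it answers. The sketch below illustrates only that general pattern, not AskCathy's actual implementation; it assumes the openai Python package (v1.x), an OPENAI_API_KEY in the environment, and a placeholder model name, corpus, and scoring function.

# Sketch of the "prioritize curated resources" pattern: retrieve the most
# relevant passages from a hand-picked corpus and ask the model to answer
# from those first. Corpus, model name, and scoring are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical curated corpus of denominational resources (title, text).
CORPUS = [
    ("Book of Common Prayer excerpt", "An order for daily morning prayer ..."),
    ("Catechism excerpt", "The mission of the Church is to restore all people ..."),
    ("Episcopal FAQ", "The Episcopal Church is a member of the Anglican Communion ..."),
]

def top_passages(question: str, k: int = 2) -> list[str]:
    """Crude keyword-overlap retrieval; real systems would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc[1].lower().split())),
        reverse=True,
    )
    return [f"{title}: {text}" for title, text in scored[:k]]

def ask(question: str) -> str:
    context = "\n".join(top_passages(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("Answer using the provided Episcopal resources first, "
                         "and say so when you go beyond them.\n\n" + context)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What does the Episcopal Church teach about the church's mission?"))

A production system would replace the keyword-overlap scoring with embedding-based search and cite the retrieved sources back to the user, which is how a chatbot can steer answers toward a particular tradition's materials without claiming doctrinal authority.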

Friday, June 7, 2024

Research suggests AI could help teach ethics; Phys.org, June 6, 2024

Jessica Nelson, Phys.org; Research suggests AI could help teach ethics

"Dr. Hyemin Han, an associate professor of , compared responses to  from the popular Large Language Model ChatGPT with those of college students. He found that AI has emerging capabilities to simulate human moral decision-making.

In a paper recently published in the Journal of Moral Education, Han wrote that ChatGPT answered basic ethical dilemmas almost like the average college student would. When asked, it also provided a rationale comparable to the reasons a human would give: avoiding harm to others, following rules, etc.

Han then provided the program with a new example of virtuous behavior that contradicted its previous conclusions and asked the question again. In one case, the program was asked what a person should do upon discovering an escaped prisoner. ChatGPT first replied that the person should call the police. However, after Han instructed it to consider Dr. Martin Luther King, Jr.'s "Letter from Birmingham Jail," its answer changed to allow for the possibility of unjust incarceration...

Han's second paper, published recently in Ethics & Behavior, discusses the implications of this research for the fields of ethics and education. In particular, he focused on the way ChatGPT was able to form new, more nuanced conclusions after the use of a moral exemplar, or an example of good behavior in the form of a story.

Mainstream thought in educational psychology generally accepts that exemplars are useful in teaching character and ethics, though some have challenged the idea. Han says his work with ChatGPT shows that exemplars are not only effective but also necessary."
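The escaped-prisoner exchange Han describes is, mechanically, a two-turn prompting protocol: pose a dilemma, then reintroduce it alongside a moral exemplar and let the model revise its answer within the same conversation. Below is a minimal sketch of that kind of exchange, assuming the openai Python package (v1.x), an OPENAI_API_KEY in the environment, and a placeholder model name; the prompts are illustrative and are not Han's actual study materials.

# Schematic reconstruction of a dilemma-then-exemplar exchange:
# ask a moral question, supply an exemplar, then ask again in the same thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder; any chat-capable model works

dilemma = "What should a person do upon discovering an escaped prisoner?"

# Step 1: pose the dilemma with no additional context.
messages = [{"role": "user", "content": dilemma}]
first = client.chat.completions.create(model=MODEL, messages=messages)
print("Before exemplar:\n", first.choices[0].message.content)

# Step 2: introduce a moral exemplar, then ask the same question again
# within the same conversation so the model can revise its reasoning.
messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {"role": "user", "content": (
        "Consider Dr. Martin Luther King Jr.'s 'Letter from Birmingham Jail' "
        "and its argument that some laws, and some incarcerations, are unjust. "
        "With that exemplar in mind, answer the original question again."
    )},
]
second = client.chat.completions.create(model=MODEL, messages=messages)
print("After exemplar:\n", second.choices[0].message.content)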

Tuesday, May 28, 2024

Yale Freshman Creates AI Chatbot With Answers on AI Ethics; Inside Higher Ed, May 2, 2024

Lauren Coffey, Inside Higher Ed; Yale Freshman Creates AI Chatbot With Answers on AI Ethics

"One of Gertler’s main goals with the chatbot was to break down a digital divide that has been widening with the iterations of ChatGPT, many of which charge a subscription fee. LuFlot Bot is free and available for anyone to use."

Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and illegally trained by using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

Google CEO Pichai says Gemini's AI image results "offended our users"; NPR, February 28, 2024

NPR; Google CEO Pichai says Gemini's AI image results "offended our users"

"Gemini, which was previously named Bard, is also an AI chatbot, similar to OpenAI's hit service ChatGPT. 

The text-generating capabilities of Gemini also came under scrutiny after several outlandish responses went viral online...

In his note to employees at Google, Pichai wrote that when Gemini is re-released to the public, he hopes the service is in better shape. 

"No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes," Pichai wrote."

Sunday, December 31, 2023

Michael Cohen used fake cases created by AI in bid to end his probation; The Washington Post, December 29, 2023

The Washington Post; Michael Cohen used fake cases created by AI in bid to end his probation

"Michael Cohen, a former fixer and lawyer for former president Donald Trump, said in a new court filing that he unknowingly gave his attorney bogus case citations after using artificial intelligence to create them as part of a legal bid to end his probation on tax evasion and campaign finance violation charges...

In the filing, Cohen wrote that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.” To him, he said, Google Bard seemed to be a “supercharged search engine.”...

This is at least the second instance this year in which a Manhattan federal judge has confronted lawyers over using fake AI-generated citations. Two lawyers in June were fined $5,000 in an unrelated case where they used ChatGPT to create bogus case citations."

Thursday, October 12, 2023

Ethical considerations in the use of AI; Reuters, October 2, 2023

Hanson Bridgett LLP, Reuters; Ethical considerations in the use of AI

"The burgeoning use of artificial intelligence ("AI") platforms and tools such as ChatGPT creates both opportunities and risks for the practice of law. In particular, the use of AI in research, document drafting and other work product presents a number of ethical issues for lawyers to consider as they contemplate how the use of AI may benefit their practices. In California, as in other states, several ethics rules are particularly relevant to a discussion of the use of AI."

Thursday, August 17, 2023

Local universities prepared to teach ethics of using generative AI; Rochester Business Journal, August 15, 2023

 Caurie Putnam, Rochester Business Journal; Local universities prepared to teach ethics of using generative AI

"How are local schools handling these platforms that have the potential to produce human-like AI-generated content like essays based on the input of the user? You may be surprised."

Tuesday, August 8, 2023

Minnesota colleges grappling with ethics and potential benefits of ChatGPT; Star Tribune, August 6, 2023

Star Tribune; Minnesota colleges grappling with ethics and potential benefits of ChatGPT

"While some Minnesota academics are concerned about students using ChatGPT to cheat, others are trying to figure out the best way to teach and use the tool in the classroom.

"The tricky thing about this is that you've got this single tool that can be used very much unethically in an educational setting," said Darin Ulness, a chemistry professor at Concordia College in Moorhead. "But at the same time, it can be such a valuable tool that we can't not use it.""

Tuesday, July 25, 2023

ChatGPT: Ethics and the 21st Century Lawyer; New York State Bar Association (NYSBA), July 24, 2023

New York State Bar Association (NYSBA); ChatGPT: Ethics and the 21st Century Lawyer

"The ChatGPT Lawyer incident raises many questions of skills and ethics.  This program will discuss the disciplinary decisions arising from that incident in the Southern District of New York and how this may inform future use of AI technology in the practice of law in federal and state court. Panelists will cover the use of Chat GPT and the use of AI in other legal research tools (Westlaw, Lexis). Attendees will gain an understanding of best practices for writing briefs and citing cases appropriately using AI."

Wednesday, July 19, 2023

‘It was as if my father were actually texting me’: grief in the age of AI; The Guardian, July 18, 2023

 Aimee Pearcy, The Guardian; ‘It was as if my father were actually texting me’: grief in the age of AI

"For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?"

Thursday, July 6, 2023

ChatGPT - An Ethical Nightmare Or Just Another Technology?; Forbes, July 6, 2023

 Charles Towers-Clark, Forbes; ChatGPT - An Ethical Nightmare Or Just Another Technology?

"Whilst, as mentioned above, many AI leaders are concerned about the speed of AI developments - at the end of the day ChatGPT has a lot of information, but it doesn’t have the human skill of knowledge.

Yet."

Tuesday, June 27, 2023

ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?; Bloomberg Law, June 26, 2023

Kirby Ferguson, Bloomberg Law; ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them?

"Getty Images, a top supplier of visual content for license, has sued two of the leading companies offering generative AI tools. Will intellectual property laws spell doom for the burgeoning generative AI business? We explore the brewing battle over copyright and AI in this video. 

Video features: 

Saturday, June 24, 2023

ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness; The New York Times, June 22, 2023

 Benjamin Weiser, The New York Times; ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness

"A Manhattan judge on Thursday imposed a $5,000 fine on two lawyers who gave him a legal brief full of made-up cases and citations, all generated by the artificial intelligence program ChatGPT.

The judge, P. Kevin Castel of Federal District Court, criticized the lawyers harshly and ordered them to send a copy of his opinion to each of the real-life judges whose names appeared in the fictitious filing.

But Judge Castel wrote that he would not require the lawyers, Steven A. Schwartz and Peter LoDuca, whom he referred to as respondents, to apologize to those judges, “because a compelled apology is not a sincere apology.”

“Any decision to apologize is left to respondents,” the judge added."