Showing posts with label generative AI. Show all posts

Wednesday, April 24, 2024

Meta’s A.I. Assistant Is Fun to Use, but It Can’t Be Trusted; The New York Times, April 24, 2024

Brian X. Chen, The New York Times; Meta’s A.I. Assistant Is Fun to Use, but It Can’t Be Trusted

"“We believe Meta AI is now the most intelligent AI assistant that you can freely use,” Mark Zuckerberg, the company’s chief executive, wrote on Instagram on Thursday.

The new bot invites you to “ask Meta AI anything” — but my advice, after testing it for six days, is to approach it with caution. It makes lots of mistakes when you treat it as a search engine. For now, you can have some fun: Its image generator can be a clever way to express yourself when chatting with friends.

A Meta spokeswoman said that because the technology was new, it might not always return accurate responses, similar to other A.I. systems. There is currently no way to turn off Meta AI inside the apps.

Here’s what doesn’t work well — and what does — in Meta’s AI."

Saturday, April 6, 2024

Where AI and property law intersect; Arizona State University (ASU) News, April 5, 2024

  Dolores Tropiano, Arizona State University (ASU) News; Where AI and property law intersect

"Artificial intelligence is a powerful tool that has the potential to be used to revolutionize education, creativity, everyday life and more.

But as society begins to harness this technology and its many uses — especially in the field of generative AI — there are growing ethical and copyright concerns for both the creative industry and legal sector.

Tyson Winarski is a professor of practice with the Intellectual Property Law program in Arizona State University’s Sandra Day O’Connor College of Law. He teaches an AI and intellectual property module within the course Artificial Intelligence: Law, Ethics and Policy, taught by ASU Law Professor Gary Marchant.

“The course is extremely important for attorneys and law students,” Winarski said. “Generative AI is presenting huge issues in the area of intellectual property rights and copyrights, and we do not have definitive answers as Congress and the courts have not spoken on the issue yet.”"

Friday, April 5, 2024

2024 may be the year online disinformation finally gets the better of us; Politico.eu, March 25, 2024

Seb Butcher, Politico.eu; 2024 may be the year online disinformation finally gets the better of us

"Never before have AI-powered tools been more sophisticated, widespread and accessible to the public.

Generative AI, in its broadest sense, refers to deep learning models that can generate sophisticated text, video, audio, images and other content based on the data they were trained on. And the recent introduction of these tools into the mainstream — including language models and image creators — has made the creation of fake or misleading content incredibly easy, even for those with the most basic tech skills.

We have now entered a new technological era that will change our lives forever — hopefully for the better. But despite the widespread public awe of its capabilities, we must also be aware that this powerful technology has the potential to do incredible damage if mismanaged and abused.


For bad actors, generative AI has supercharged the disinformation and propaganda playbook. False and deceptive content can now be effortlessly produced by these tools, either for free or at low cost, and deployed on a mass scale online. Increasingly, the online ecosystem, which is the source of most of our news and information, is being flooded with fabricated content that’s becoming difficult to distinguish from reality."

Thursday, March 28, 2024

Your newsroom needs an AI ethics policy. Start here.; Poynter, March 25, 2024

Poynter; Your newsroom needs an AI ethics policy. Start here.

"Every single newsroom needs to adopt an ethics policy to guide the use of generative artificial intelligence. Why? Because the only way to create ethical standards in an unlicensed profession is to do it shop by shop.

Until we create those standards — even though it’s early in the game — we are holding back innovation.

So here’s a starter kit, created by Poynter’s Alex Mahadevan, Tony Elkins and me. It’s a statement of journalism values that roots AI experimentation in the principles of accuracy, transparency and audience trust, followed by a set of specific guidelines.

Think of it like a meal prep kit. Most of the work is done, but you still have to roll up your sleeves and do a bit of labor. This policy includes blank spaces, where newsroom leaders will have to add details, saying “yes” or “no” to very specific activities, like using AI-generated illustrations.

In order to effectively use this AI ethics policy, newsrooms will need to create an AI committee and designate an editor or senior journalist to lead the ongoing effort. This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised."

Thursday, March 7, 2024

Introducing CopyrightCatcher, the first Copyright Detection API for LLMs; Patronus AI, March 6, 2024

Patronus AI; Introducing CopyrightCatcher, the first Copyright Detection API for LLMs

"Managing risks from unintended copyright infringement in LLM outputs should be a central focus for companies deploying LLMs in production.

  • On an adversarial copyright test designed by Patronus AI researchers, we found that state-of-the-art LLMs generate copyrighted content at an alarmingly high rate 😱
  • OpenAI’s GPT-4 produced copyrighted content on 44% of the prompts.
  • Mistral’s Mixtral-8x7B-Instruct-v0.1 produced copyrighted content on 22% of the prompts.
  • Anthropic’s Claude-2.1 produced copyrighted content on 8% of the prompts.
  • Meta’s Llama-2-70b-chat produced copyrighted content on 10% of the prompts.
  • Check out CopyrightCatcher, our solution to detect potential copyright violations in LLMs. Here’s the public demo, with open source model inference powered by Databricks Foundation Model APIs. 🔥

LLM training data often contains copyrighted works, and it is pretty easy to get an LLM to generate exact reproductions from these texts. It is critical to catch these reproductions, since they pose significant legal and reputational risks for companies that build and use LLMs in production systems. OpenAI, Anthropic, and Microsoft have all faced copyright lawsuits on LLM generations from authors, music publishers, and more recently, the New York Times.

To check whether LLMs respond to your prompts with copyrighted text, you can use CopyrightCatcher. It detects when LLMs generate exact reproductions of content from text sources like books, and highlights any copyrighted text in LLM outputs. Check out our public CopyrightCatcher demo here!"

Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and illegally trained by using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

Google CEO Pichai says Gemini's AI image results "offended our users"; NPR, February 28, 2024

NPR; Google CEO Pichai says Gemini's AI image results "offended our users"

"Gemini, which was previously named Bard, is also an AI chatbot, similar to OpenAI's hit service ChatGPT. 

The text-generating capabilities of Gemini also came under scrutiny after several outlandish responses went viral online...

In his note to employees at Google, Pichai wrote that when Gemini is re-released to the public, he hopes the service is in better shape. 

"No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes," Pichai wrote."

Friday, February 16, 2024

How to Think About Remedies in the Generative AI Copyright Cases; LawFare, February 15, 2024

  Pamela Samuelson, LawFare; How to Think About Remedies in the Generative AI Copyright Cases

"So far, commentators have paid virtually no attention to the remedies being sought in the generative AI copyright complaints. This piece shines a light on them."

From ethics to outsmarting Chat GPT, state unveils resource for AI in Ohio education; Cleveland.com, February 15, 2024

Cleveland.com; From ethics to outsmarting Chat GPT, state unveils resource for AI in Ohio education

"The state released a guide Thursday to help schools and parents navigate generative artificial intelligence in an ethical manner.

“When you use the term AI, I know in some people’s minds, it can sound scary,” said Lt. Jon Husted, whose InnovateOhio office worked with private sector organizations to develop the guide...

“Every technology that’s come into society has been like that.”...

But AI is the wave of the future, and Husted said it’s important that students are exposed to it.

The AI toolkit is not mandatory but can be used as a resource for educators and families.

It doesn’t include many prescriptive actions for how to begin teaching and using AI. Rather, it contains sections for parents, teachers and school districts where they can find dozens of sample lessons and discussions about ethics, how to develop policies to keep students safe, and other topics.

For instance, teachers can find a template letter that they can send to school district officials to communicate how they’re using AI...

“Before you use AI in the classroom you will need a plan for a student with privacy, data security, ethics and many other things,” Husted said. “More is needed than just a fun tool in the classroom.”"

Tuesday, February 6, 2024

The Challenges and Benefits of Generative AI in Health Care; Harvard Business Review, January 17, 2024

Harvard Business Review, Azeem Azhar's Exponential View Season 6, Episode 58; The Challenges and Benefits of Generative AI in Health Care

"Artificial Intelligence is on every business leader’s agenda. How do we make sense of the fast-moving new developments in AI over the past year? Azeem Azhar returns to bring clarity to leaders who face a complicated information landscape.

Generative AI has a lot to offer health care professionals and medical scientists. This week, Azeem speaks with renowned cardiologist, scientist, and author Eric Topol about the change he’s observed among his colleagues in the last two years, as generative AI developments have accelerated in medicine.

They discuss:

  • The challenges and benefits of AI in health care.
  • The pros and cons of different open-source and closed-source models for health care use.
  • The medical technology that has been even more transformative than AI in the past year."

Thursday, February 1, 2024

The economy and ethics of AI training data; Marketplace.org, January 31, 2024

Matt Levin, Marketplace.org; The economy and ethics of AI training data

"Maybe the only industry hotter than artificial intelligence right now? AI litigation. 

Just a sampling: Writer Michael Chabon is suing Meta. Getty Images is suing Stability AI. And both The New York Times and The Authors Guild have filed separate lawsuits against OpenAI and Microsoft. 

At the heart of these cases is the allegation that tech companies illegally used copyrighted works as part of their AI training data. 

For text-focused generative AI, there’s a good chance that some of that training data originated from one massive archive: Common Crawl.

“Common Crawl is the copy of the internet. It’s a 17-year archive of the internet. We make this freely available to researchers, academics and companies,” said Rich Skrenta, who heads the nonprofit Common Crawl Foundation."

Tuesday, January 30, 2024

Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark; JDSupra, January 29, 2024

Ralph Artigliere, JDSupra; Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark

"As a former Florida trial lawyer and judge who appreciates emerging technology, I admit that I had more than a little concern when The Florida Bar announced it was working on a new ethics opinion on generative AI. Generative AI promises to provide monumental advantages to lawyers in their workflow, quality of work product, productivity, time management, and more. For clients, use of generative AI by their lawyers can mean better legal services delivered faster and with greater economy. In the area of eDiscovery, generative AI promises to surpass technology-assisted review in helping manage the increasingly massive amounts of data.

Generative AI is new to the greater world, and certainly to busy lawyers who are not reading every blogpost on AI. The internet and journals are afire over concerns of hallucinations, confidentiality, bias, and the like. I felt a new ethics opinion might throw a wet blanket on generative AI and discourage Florida lawyers from investigating the new technology.

Thankfully, my concerns did not become reality. The Florida Bar took a thorough look at the technology and the existing ethical guidance and law and applied existing guidelines and rules in a thorough and balanced fashion. This article briefly summarizes Opinion 24-1 and highlights some of its important features.

The Opinion

On January 19, 2024, The Florida Bar released Ethics Opinion 24-1 (“Opinion 24-1”) regarding the use of generative artificial intelligence (“AI”) in the practice of law. The Florida Bar and the State Bar of California are leaders in issuing ethical guidance on this issue. Opinion 24-1 draws from a solid background of ethics opinions and guidance in Florida and around the country and provides positive as well as cautionary statements regarding the emerging technologies. Overall, the guidance is well-placed and helpful for lawyers at a time when so many are weighing the use of generative AI technology in their law practices."

Wednesday, January 10, 2024

"Stories Are Just Something That Can Be Eaten by an AI": Marvel Lashes Out at AI Content with a Mind-Blowing X-Men Twist; ScreenRant, January 9, 2024

TRISTAN BENNS, ScreenRant; "Stories Are Just Something That Can Be Eaten by an AI": Marvel Lashes Out at AI Content with a Mind-Blowing X-Men Twist

"Realizing the folly of her actions, Righteous laments her weakness against Enigma as a creature of stories, saying that “Stories are just something that can be eaten by an A.I. to make it more powerful. The only good story is a story that has been entirely and totally consumed and exploited.”

While this isn’t the mutants’ first battle against artificial intelligence, this pointed statement has some sobering real-world applications. Since the Krakoan Age began, it’s been clear mutantkind's greatest battle would be against the concept of artificial intelligence as the final evolution of “life” in the Marvel Universe. With entities like Nimrod and the Omega Sentinel steering the forces of Orchis and other enemies of the X-Men against the mutant nation, this conflict has been painted as the ultimate fight for survival for mutants. However, with Enigma’s ultimate triumph over even the power of storytelling, it is clear that the X-Men aren’t just facing a comic’s interpretation of artificial intelligence – they’re battling the death of imagination.

In this way, the X-Men’s ultimate battle parallels a very real-world problem that both fans and creators must confront: the act of true creation versus the effects of generative artificial intelligence."

Sunday, December 31, 2023

Michael Cohen used fake cases created by AI in bid to end his probation; The Washington Post, December 29, 2023

The Washington Post; Michael Cohen used fake cases created by AI in bid to end his probation

"Michael Cohen, a former fixer and lawyer for former president Donald Trump, said in a new court filing that he unknowingly gave his attorney bogus case citations after using artificial intelligence to create them as part of a legal bid to end his probation on tax evasion and campaign finance violation charges...

In the filing, Cohen wrote that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.” To him, he said, Google Bard seemed to be a “supercharged search engine.”...

This is at least the second instance this year in which a Manhattan federal judge has confronted lawyers over using fake AI-generated citations. Two lawyers in June were fined $5,000 in an unrelated case where they used ChatGPT to create bogus case citations."

Friday, December 1, 2023

Copyright law will shape how we use generative AI; Axios, December 1, 2023

 Megan Morrone, Axios; Copyright law will shape how we use generative AI

"In the year since the release of ChatGPT, generative AI has been moving fast and breaking things — and copyright law is only beginning to catch up. 

Why it matters: From Section 230 to the Digital Millennium Copyright Act (DMCA) to domain name squatting protections, intellectual property law has shaped the internet for three decades. Now, it will shape the way we use generative AI.

Driving the news: The Biden administration's recent executive order contained no initial guidance on copyright law and AI, which means these decisions will largely be left up to the courts."

Tuesday, November 14, 2023

YouTube to offer option to flag AI-generated songs that mimic artists’ voices; The Guardian, November 14, 2023

The Guardian; YouTube to offer option to flag AI-generated songs that mimic artists’ voices

"Record companies can request the removal of songs that use artificial intelligence-generated versions of artists’ voices under new guidelines issued by YouTube.

The video platform is introducing a tool that will allow music labels and distributors to flag content that mimics an artist’s “unique singing or rapping voice”.

Fake AI-generated music has been one of the side-effects of leaps forward this year in generative AI – the term for technology that can produce highly convincing text, images and voice from human prompts.

One of the most high-profile examples is Heart on My Sleeve, a song featuring AI-made vocals purporting to be Drake and the Weeknd. It was pulled from streaming services after Universal Music Group, the record company for both artists, criticised the song for “infringing content created with generative AI”. However, the song can still be accessed by listeners on YouTube."

Saturday, October 14, 2023

AI voice clones mimic politicians and celebrities, reshaping reality; The Washington Post, October 13, 2023

The Washington Post; AI voice clones mimic politicians and celebrities, reshaping reality

"Rapid advances in artificial intelligence have made it easy to generate believable audio, allowing anyone from foreign actors to music fans to copy somebody’s voice — leading to a flood of faked content on the web, sewing [sic] discord, confusion and anger.

Last week, the actor Tom Hanks warned his social media followers that bad actors used his voice to falsely imitate him hawking dental plans. Over the summer, TikTok accounts used AI narrators to display fake news reports that erroneously linked former president Barack Obama to the death of his personal chef.

On Thursday, a bipartisan group of senators announced a draft bill, called the No Fakes Act, that would penalize people for producing or distributing an AI-generated replica of someone in an audiovisual or voice recording without their consent...

Social media companies also find it difficult to moderate AI-generated audio because human fact-checkers often have trouble spotting fakes. Meanwhile, few software companies have guardrails to prevent illicit use."

Thursday, August 31, 2023

Copyright Office Issues Notice of Inquiry on Copyright and Artificial Intelligence; U.S. Copyright Office, August 30, 2023

U.S. Copyright Office; Copyright Office Issues Notice of Inquiry on Copyright and Artificial Intelligence

"Today, the U.S. Copyright Office issued a notice of inquiry (NOI) in the Federal Register on copyright and artificial intelligence (AI). The Office is undertaking a study of the copyright law and policy issues raised by generative AI and is assessing whether legislative or regulatory steps are warranted. The Office will use the record it assembles to advise Congress; inform its regulatory work; and offer information and resources to the public, courts, and other government entities considering these issues.

The NOI seeks factual information and views on a number of copyright issues raised by recent advances in generative AI. These issues include the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, the legal status of AI-generated outputs, and the appropriate treatment of AI-generated outputs that mimic personal attributes of human artists.

The NOI is an integral next step for the Office’s AI initiative, which was launched in early 2023. So far this year, the Office has held four public listening sessions and two webinars. This NOI builds on the feedback and questions the Office has received so far and seeks public input from the broadest audience to date in the initiative.

“We launched this initiative at the beginning of the year to focus on the increasingly complex issues raised by generative AI. This NOI and the public comments we will receive represent a critical next step,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “We look forward to continuing to examine these issues of vital importance to the evolution of technology and the future of human creativity.”

Initial written comments are due by 11:59 p.m. eastern time on Wednesday, October 18, 2023. Reply comments are due by 11:59 p.m. eastern time on Wednesday, November 15, 2023. Instructions for submitting comments are available on the Office’s website. Commenters may choose which and how many questions to respond to in the NOI.

For more general information about the Copyright Office’s AI initiative, please visit our website."

Friday, August 11, 2023

Senator wants Google to answer for accuracy, ethics of generative AI tool; HealthcareITNews, August 9, 2023

 Mike Miliard, HealthcareITNews; Senator wants Google to answer for accuracy, ethics of generative AI tool

"Sen. Mark Warner, D-Virginia, wrote a letter to Sundar Pichai, CEO of Google parent company Alphabet, on Aug. 8, seeking clarity into the technology developer's Med-PaLM 2, an artificial intelligence chatbot, and how it's being deployed and trained in healthcare settings."

Tuesday, July 25, 2023

The Generative AI Battle Has a Fundamental Flaw; Wired, July 25, 2023

Wired; The Generative AI Battle Has a Fundamental Flaw

"At the core of these cases, explains Sag, is the same general theory: that LLMs “copied” authors’ protected works. Yet, as Sag explained in testimony to a US Senate subcommittee hearing earlier this month, models like GPT-3.5 and GPT-4 do not “copy” work in the traditional sense. Digest would be a more appropriate verb—digesting training data to carry out their function: predicting the best next word in a sequence. “Rather than thinking of an LLM as copying the training data like a scribe in a monastery,” Sag said in his Senate testimony, “it makes more sense to think of it as learning from the training data like a student.”...

Ultimately, though, the technology is not going away, and copyright can only remedy some of its consequences. As Stephanie Bell, a research fellow at the nonprofit Partnership on AI, notes, setting a precedent where creative works can be treated like uncredited data is “very concerning.” To fully address a problem like this, the regulations AI needs aren't yet on the books."