Showing posts with label AI outputs. Show all posts

Wednesday, June 25, 2025

Ball State University Libraries Launches Research Guide on Ethical AI Use; Ball State University, June 24, 2025

Ball State University; Ball State University Libraries Launches Research Guide on Ethical AI Use

"In an era in which artificial intelligence tools are rapidly reshaping how we access and share information, Ball State University Libraries has introduced a new research guide to help students, faculty, staff, and community members use AI more thoughtfully and effectively.

The interactive guide, now available at bsu.libguides.com, equips users with foundational skills to assess the credibility, accuracy, and ethical implications of generative AI tools like ChatGPT and image generators. Through five short videos and practical examples, the guide teaches users to identify potential misinformation, recognize AI-generated bias, and apply AI output in meaningful and responsible ways.

Key learning outcomes include:"

Tuesday, June 24, 2025

Anthropic’s AI copyright ‘win’ is more complicated than it looks; Fast Company, June 24, 2025

Chris Stokel-Walker, Fast Company; Anthropic’s AI copyright ‘win’ is more complicated than it looks

"And that’s the catch: This wasn’t an unvarnished win for Anthropic. Like other tech companies, Anthropic allegedly sourced training materials from piracy sites for ease—a fact that clearly troubled the court. “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup wrote, referring to Anthropic’s alleged pirating of more than 7 million books.

That alone could carry billions in liability, with statutory damages starting at $750 per book—a trial on that issue is still to come.

So while tech companies may still claim victory (with some justification, given the fair use precedent), the same ruling also implies that companies will need to pay substantial sums to legally obtain training materials. OpenAI, for its part, has in the past argued that licensing all the copyrighted material needed to train its models would be practically impossible.

Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction: The U.S.,” she says. “I think they don’t entirely have purchase over this thing about whether or not it was transformative in the sense of changing Claude’s output.”"
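The "billions in liability" above is simple arithmetic: at the statutory minimum of $750 per work, the roughly 7 million books at issue imply over $5 billion in exposure before any trial finding. A rough back-of-the-envelope sketch (the book count and $750 floor come from the article; the $30,000 and $150,000 figures are the general per-work ranges in U.S. copyright law, not claims from this case):

```python
# Back-of-the-envelope statutory-damages exposure (17 U.S.C. § 504(c)).
# ~7 million books is the figure reported in the article; $750 is the
# statutory minimum per infringed work, $30,000 the ordinary maximum,
# and $150,000 the maximum for willful infringement.
BOOKS = 7_000_000

def exposure(per_work_damages: int, works: int = BOOKS) -> int:
    """Total exposure if every work drew the same per-work award."""
    return per_work_damages * works

print(f"At the $750 minimum:      ${exposure(750):,}")      # $5,250,000,000
print(f"At the $30,000 maximum:   ${exposure(30_000):,}")
print(f"At the $150,000 maximum:  ${exposure(150_000):,}")
```

Even the floor lands above $5 billion, which is why the pending damages trial matters more to Anthropic than the fair-use headline.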

The copyright war between the AI industry and creatives; Financial Times, June 23, 2025

Financial Times; The copyright war between the AI industry and creatives

"One is that the government itself estimates that “creative industries generated £126bn in gross value added to the economy [5 per cent of GDP] and employed 2.4 million people in 2022”. It is at the very least an open question whether the value added of the AI industry will ever be of a comparable scale in this country. Another is that the creative industries represent much of the best of what the UK and indeed humanity does. The idea of handing over its output for free is abhorrent...

Interestingly, for much of the 19th century, the US did not recognise international copyright at all in its domestic law. Anthony Trollope himself complained fiercely about the theft of the copyright over his books."

Wednesday, April 9, 2025

I’m Not Convinced Ethical Generative AI Currently Exists; Wired, February 20, 2025

Wired; I’m Not Convinced Ethical Generative AI Currently Exists

"For me, the ethics of generative AI use can be broken down to issues with how the models are developed—specifically, how the data used to train them was accessed—as well as ongoing concerns about their environmental impact. In order to power a chatbot or image generator, an obscene amount of data is required, and the decisions developers have made in the past—and continue to make—to obtain this repository of data are questionable and shrouded in secrecy. Even what people in Silicon Valley call “open source” models hide the training datasets inside...

The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user’s prompts when interacting with a chatbot? What were the biases in the training data? How did the devs teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions."

Friday, December 27, 2024

The Job Interview Question Everyone Will Be Asking In 2025; Forbes, December 26, 2024

Chris Westfall, Forbes; The Job Interview Question Everyone Will Be Asking In 2025

"Inside job interview questions, a new number one topic has emerged. Beyond the usual inquiries around your background and experience, the theme that’s top of mind is artificial intelligence (AI). The number one question every candidate should anticipate in 2025 is this one: How familiar are you with AI, and how are you using it? Here’s how to prepare, and respond, to the new number one job interview question.

As with any job interview question, the best answer usually involves a story. Because the minute you say, “I’m very familiar with AI,” the interviewer would like you to prove it. You can say you’re a genius, super empathetic, trustworthy, or the world’s fastest coder - the tricky part is providing credible evidence. Saying you are familiar with something is not the same as demonstrating it. That’s where soft skills like communication come into play."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

 Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

Sunday, August 4, 2024

Music labels' AI lawsuits create copyright puzzle for courts; Reuters, August 4, 2024

Reuters; Music labels' AI lawsuits create copyright puzzle for courts

"Suno and Udio pointed to past public statements defending their technology when asked for comment for this story. They filed their initial responses in court on Thursday, denying any copyright violations and arguing that the lawsuits were attempts to stifle smaller competitors. They compared the labels' protests to past industry concerns about synthesizers, drum machines and other innovations replacing human musicians...

The labels' claims echo allegations by novelists, news outlets, music publishers and others in high-profile copyright lawsuits over chatbots like OpenAI's ChatGPT and Anthropic's Claude that use generative AI to create text. Those lawsuits are still pending and in their early stages.

Both sets of cases pose novel questions for the courts, including whether the law should make exceptions for AI's use of copyrighted material to create something new...

"Music copyright has always been a messy universe," said Julie Albert, an intellectual property partner at law firm Baker Botts in New York who is tracking the new cases. And even without that complication, Albert said fast-evolving AI technology is creating new uncertainty at every level of copyright law.

WHOSE FAIR USE?

The intricacies of music may matter less in the end if, as many expect, the AI cases boil down to a "fair use" defense against infringement claims - another area of U.S. copyright law filled with open questions."

Tuesday, July 2, 2024

Navigate ethical and regulatory issues of using AI; Thomson Reuters, July 1, 2024

Thomson Reuters; Navigate ethical and regulatory issues of using AI

"However, the need for regulation to ensure clarity, trust, and mitigate risk has not gone unnoticed. According to the report, the vast majority (93%) of professionals surveyed said they recognize the need for regulation. Among the top concerns: a lack of trust and unease about the accuracy of AI. This is especially true in the context of using the AI output as advice without a human checking for its accuracy."

Saturday, June 8, 2024

You Can Create Award-Winning Art With AI. Can You Copyright It?; Bloomberg Law, June 5, 2024

 Matthew S. Schwartz, Bloomberg Law; You Can Create Award-Winning Art With AI. Can You Copyright It?

"We delved into the controversy surrounding the use of copyrighted material in training AI systems in our first two episodes of this season. Now we shift our focus to the output. Who owns artwork created using artificial intelligence? Should our legal system redefine what constitutes authorship? Or, as AI promises to redefine how we create, will the government cling to historical notions of authorship?

Guests:

  • Jason M. Allen, founder of Art Incarnate
  • Sy Damle, partner in the copyright litigation group at Latham & Watkins
  • Shira Perlmutter, Register of Copyrights and director of the US Copyright Office"

Wednesday, May 29, 2024

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

Sheema Khan, The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."