
Saturday, August 16, 2025

Margaret Boden, Philosopher of Artificial Intelligence, Dies at 88; The New York Times, August 14, 2025

The New York Times; Margaret Boden, Philosopher of Artificial Intelligence, Dies at 88

"As a philosopher of AI, Professor Boden was often asked if she thought that robots would, or could, take over society.

“The truth is that they certainly won’t want to,” she wrote in Aeon magazine in 2018.

Why? Because robots, unlike humans, don’t care.

“A computer’s ‘goals,’” she wrote, “are empty of feeling.”"

Larry Ellison Wants to Do Good, Do Research and Make a Profit; The New York Times, August 12, 2025

Theodore Schleifer et al., The New York Times; Larry Ellison Wants to Do Good, Do Research and Make a Profit

"Mr. Ellison has rarely engaged with the community of Giving Pledge signers, according to two people with knowledge of the matter. He has cherished his autonomy and does not want to be influenced to support Mr. Gates’s causes, one of the people said, while also sensitive to any idea that he is backing off the pledge.

But the stakes of Mr. Ellison’s message on X are enormous. His fortune is about 10 times what it was when he signed the pledge as the software company he founded, Oracle, rides the artificial intelligence boom. Mr. Ellison controls a staggering 40-plus percent of the company’s stock...

“Oxford, Cambridge and the whole university sector are under pressure to capitalize on intellectual property because of long-running government policy belief that the U.K. has fallen behind economically,” said John Picton, an expert in nonprofit law at the University of Manchester."

Monday, August 11, 2025

How Short-Term Thinking Is Destroying America; The New York Times, August 11, 2025

Ben Rhodes, The New York Times; How Short-Term Thinking Is Destroying America

"Unsurprisingly, the second Trump administration has binged on short-term “wins” at the expense of the future. It has created trillions of dollars in prospective debt, bullied every country on earth, deregulated the spread of A.I. and denied the scientific reality of global warming. It has ignored the math that doesn’t add up, the wars that don’t end on Trump deadlines, the C.E.O.s forecasting what could amount to huge job losses if A.I. transforms our economy and the catastrophic floods, which are harbingers of a changing climate. Mr. Trump declares victory. The camera focuses on the next shiny object. Negative consequences can be obfuscated today, blamed on others tomorrow."

Boston Public Library aims to increase access to a vast historic archive using AI; NPR, August 11, 2025

NPR; Boston Public Library aims to increase access to a vast historic archive using AI

"Boston Public Library, one of the oldest and largest public library systems in the country, is launching a project this summer with OpenAI and Harvard Law School to make its trove of historically significant government documents more accessible to the public.

The documents date back to the early 1800s and include oral histories, congressional reports and surveys of different industries and communities...

Currently, members of the public who want to access these documents must show up in person. The project will enhance the metadata of each document and will enable users to search and cross-reference entire texts from anywhere in the world. 

Chapel said Boston Public Library plans to digitize 5,000 documents by the end of the year, and if all goes well, grow the project from there...

Harvard University said it could help. Researchers at the Harvard Law School Library's Institutional Data Initiative are working with libraries, museums and archives on a number of fronts, including training new AI models to help libraries enhance the searchability of their collections. 

AI companies help fund these efforts, and in return get to train their large language models on high-quality materials that are out of copyright and therefore less likely to lead to lawsuits. (Microsoft and OpenAI are among the many AI players targeted by recent copyright infringement lawsuits, in which plaintiffs such as authors claim the companies stole their works without permission.)"
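
The workflow described here combines OCR'd document text with AI-generated catalog metadata so that full texts can be searched and cross-referenced remotely. A minimal sketch of what the metadata-enrichment step might look like is below; the model name, prompt, and output format are illustrative assumptions, not the library's or OpenAI's actual pipeline.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def enrich_metadata(ocr_text: str) -> str:
        """Draft catalog metadata (title, estimated date, subject headings) for one document."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat-capable model would do
            messages=[
                {"role": "system",
                 "content": "You are a library cataloger. From the document text, return a "
                            "short descriptive title, an estimated date, and up to five "
                            "subject headings."},
                {"role": "user", "content": ocr_text[:8000]},  # keep the excerpt well inside the model's context window
            ],
        )
        return response.choices[0].message.content

In a real workflow the generated fields would presumably be reviewed by catalogers and mapped to the library's existing metadata standards before publication.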

Wednesday, July 23, 2025

Now Trump Wants to Rename Artificial Intelligence to This; The Daily Beast, July 23, 2025

Erkki Forster, The Daily Beast; Now Trump Wants to Rename Artificial Intelligence to This

"President Donald Trump has set his sights on a new linguistic enemy. While speaking at an artificial intelligence summit Wednesday, Trump realized mid-thought that he doesn’t like the word “artificial” at all. “I can’t stand it. I don’t even like the name,” the 79-year-old president said. ”You know, I don’t like anything that’s artificial so could we straighten that out please?” he asked, pointing to someone in the audience. “We should change the name.” As disbelieving laughter rippled through the room, Trump insisted, “I actually mean that—I don’t like the name ‘artificial’ anything.” He then offered an alternative—one he often uses to describe himself: “It’s not artificial. It’s genius. It’s pure genius.”"

Tuesday, July 22, 2025

Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague; ABA Journal, May 9, 2025

 ABA Journal; Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague

"The Limits of GenAI’s Simulated Humanity

  • Creative thinking. An LLM mirrors humanity’s collective intelligence, shaped by everything it has read. It excels at brainstorming and summarizing legal principles but lacks independent thought, opinions, or strategic foresight—all essential to legal practice. Therefore, if a model’s summary of your legal argument feels stale, illogical, or disconnected from human values, it may be because the model has no democratized data to pattern itself on. The good news? You may be on to something original—and truly meaningful!
  • True comprehension. An LLM does not know the law; it merely predicts legal-sounding text based on past examples and mathematical probabilities.
  • Judgment and ethics. An LLM does not possess a moral compass or the ability to make judgments in complex legal contexts. It handles facts, not subjective opinions.  
  • Long-term consistency. Due to its context window limitations, an LLM may contradict itself if key details fall outside its processing scope. It lacks persistent memory storage.
  • Limited context recognition. An LLM has limited ability to understand context beyond provided information and is limited by training data scope.
  • Trustfulness. Attorneys have a professional duty to protect client confidences, but privacy and PII (personally identifiable information) are evolving concepts within AI. Unlike humans, models can infer private information without PII, through abstract patterns in data. To safeguard client information, carefully review (or summarize with AI) your LLM’s terms of use."
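
The "long-term consistency" and "limited context recognition" points above both come down to the model's fixed context window: whatever no longer fits inside that window is simply not seen on the next turn. Below is a minimal sketch of the kind of history trimming a chat application might perform, with word counts standing in for a real tokenizer; the function and the token budget are illustrative assumptions, not any vendor's actual behavior.

    def trim_history(messages, max_tokens=4000):
        """Keep only the most recent messages that fit a fixed token budget.

        Word counts approximate tokens for illustration; a real application
        would use the model's own tokenizer.
        """
        kept, used = [], 0
        for message in reversed(messages):           # walk from newest to oldest
            cost = len(message["content"].split())   # crude token estimate
            if used + cost > max_tokens:
                break                                 # older messages silently fall out of scope
            kept.append(message)
            used += cost
        return list(reversed(kept))                   # restore chronological order

Once an early instruction or fact is trimmed this way, the model has no record of it, which is why it can contradict details supplied many turns earlier.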

Sunday, July 20, 2025

AI guzzled millions of books without permission. Authors are fighting back.; The Washington Post, July 19, 2025

The Washington Post; AI guzzled millions of books without permission. Authors are fighting back.


[Kip Currier: I've written this before on this blog and I'll say it again: technology companies would never allow anyone to freely vacuum up their content and use it without permission or compensation. Period. Full Stop.]


[Excerpt]

"Baldacci is among a group of authors suing OpenAI and Microsoft over the companies’ use of their work to train the AI software behind tools such as ChatGPT and Copilot without permission or payment — one of more than 40 lawsuits against AI companies advancing through the nation’s courts. He and other authors this week appealed to Congress for help standing up to what they see as an assault by Big Tech on their profession and the soul of literature.

They found sympathetic ears at a Senate subcommittee hearing Wednesday, where lawmakers expressed outrage at the technology industry’s practices. Their cause gained further momentum Thursday when a federal judge granted class-action status to another group of authors who allege that the AI firm Anthropic pirated their books.

“I see it as one of the moral issues of our time with respect to technology,” Ralph Eubanks, an author and University of Mississippi professor who is president of the Authors Guild, said in a phone interview. “Sometimes it keeps me up at night.”

Lawsuits have revealed that some AI companies had used legally dubious “torrent” sites to download millions of digitized books without having to pay for them."

US authors suing Anthropic can band together in copyright class action, judge rules; Reuters, July 17, 2025

Reuters; US authors suing Anthropic can band together in copyright class action, judge rules

"A California federal judge ruled on Thursday that three authors suing artificial intelligence startup Anthropic for copyright infringement can represent writers nationwide whose books Anthropic allegedly pirated to train its AI system.

U.S. District Judge William Alsup said the authors can bring a class action on behalf of all U.S. writers whose works Anthropic allegedly downloaded from "pirate libraries" LibGen and PiLiMi to create a repository of millions of books in 2021 and 2022."

Thursday, July 17, 2025

The Future of Weather Prediction Is Here. Maybe.; The New York Times, July 13, 2025

The New York Times; The Future of Weather Prediction Is Here. Maybe.

Thanks to A.I., companies like WindBorne hope to usher in a golden age of forecasting. But they rely in part on government data — and the agency that provides it is in turmoil.

"The good news is that we may be poised to enter a new golden age of A.I.-enabled weather prediction. That heat wave that scorched the East Coast last month? WindBorne says its software first flagged that 15 days out, two to four days before competing forecasts.

There’s a catch, though. These new deep learning forecasts are built on data provided for free by public science agencies. In the United States, that relationship is threatened by the Trump administration’s heavy cuts to the National Oceanic and Atmospheric Administration, or NOAA, which houses the National Weather Service."

Wednesday, July 16, 2025

Can Gen AI and Copyright Coexist?; Harvard Business Review, July 16, 2025

Harvard Business Review; Can Gen AI and Copyright Coexist?

"We’re experts in the study of digital transformation and have given this issue a lot of thought. We recently served, for example, on a roundtable of 10 economists convened by the U.S. Copyright Office to study the implications of gen AI on copyright policy. We recognize that the two decisions are far from the last word on this topic; both will no doubt be appealed to the Ninth Circuit and then subsequently to the Supreme Court. But in the meantime, we believe there are already many lessons to be learned from these decisions about the implications of gen AI for business—lessons that will be useful for leaders in both the creative industries and gen AI companies."

I’m Watching the Sacrifice of College’s Soul; The New York Times, July 14, 2025

The New York Times; I’m Watching the Sacrifice of College’s Soul

"At dinner recently with fellow professors, the conversation turned to two topics that have been unavoidable these past few years. The first was grade inflation — and the reality that getting A’s seldom requires any herculean effort and doesn’t distinguish one bright consultant-to-be from the next. Many students, accordingly, redirect their energies away from the classroom and the library. Less deep reading. More shrewd networking.gr

The second topic was A.I. Given its advancing sophistication, should we surrender to it? Accept that students will use it without detection to cull a semester’s worth of material and sculpt their paragraphs? Perhaps we just teach them how to fashion the most effective prompts for bots? Perhaps the future of college instruction lies in whatever slivers of mental endeavor can’t be outsourced to these digital know-it-alls.

And perhaps a certain idea of college — a certain ideal of college — is dying...

I’m not under the illusion that college used to be regarded principally in such high-minded terms. From the G.I. Bill onward, it has been held up rightfully as an engine of social mobility, a ladder of professional opportunity, yielding greater wealth for its graduates and society both.

But there was a concurrent sense that it contributed mightily to the civic good — that it made society culturally and morally richer. That feeling is now fighting for survival. So much over the past quarter century has transformed Americans’ relationship to higher education in ways that degrade its loftier goals. The corpus of college lumbers on, but some of its soul is missing."

Friday, July 11, 2025

Join Our Livestream: Inside the AI Copyright Battles; Wired, July 11, 2025

Reece Rogers, Wired; Join Our Livestream: Inside the AI Copyright Battles

"WHAT'S GOING ON right now with the copyright battles over artificial intelligence? Many lawsuits regarding generative AI’s training materials were initially filed back in 2023, with decisions just now starting to trickle out. Whether it’s Midjourney generating videos of Disney characters, like Wall-E brandishing a gun, or an exit interview with a top AI lawyer as he left Meta, WIRED senior writer Kate Knibbs has been following this fight for years—and she’s ready to answer your questions.

Bring all your burning questions about the AI copyright battles to WIRED’s next, subscriber-only livestream scheduled for July 16 at 12pm ET / 9am PT, hosted by Reece Rogers with Kate Knibbs. The event will be streamed right here. For subscribers who are not able to join, a replay of the livestream will be available after the event."

Thursday, July 10, 2025

An AI Ethics Roadmap Beyond Academic Integrity For Higher Education; Forbes, July 8, 2025

Dr. Aviva Legatt, Forbes; An AI Ethics Roadmap Beyond Academic Integrity For Higher Education

"Higher education institutions are rapidly embracing artificial intelligence, but often without a comprehensive strategic framework. According to the 2025 EDUCAUSE AI Landscape Study, 74% of institutions prioritized AI use for academic integrity alongside other core challenges like coursework (65%) and assessment (54%). At the same time, 68% of respondents say students use AI “somewhat more” or “a lot more” than faculty.

These data underscore a potential misalignment: Institutions recognize integrity as a top concern, but students are racing ahead with AI and faculty lack commensurate fluency. As a result, AI ethics debates are unfolding in classrooms with underprepared educators. “Faculty were expected to change their assignments overnight when generative AI hit,” said Jenny Maxwell, Head of Education at Grammarly. “We’re trying to meet institutions where they are—offering tools and guidance that support both academic integrity and student learning without adding more burden to educators.”

The necessity of integrating ethical considerations alongside AI tools in education is paramount. Employers have made it clear that ethical reasoning and responsible technology use are critical skills in today’s workforce. According to the Graduate Management Admission Council’s 2025 Corporate Recruiters Survey, these skills are increasingly vital for graduates, underscoring ethics as a competitive advantage rather than merely a supplemental skill. “Just because you think you’re an ethical person doesn’t mean you won’t inadvertently do harm if you’re working in machine learning without being trained and constantly aware of the risks,” said Liz Moran, Director of Academic Programs at SAS. “That’s why we’re launching an AI Foundations credential with a dedicated course on Responsible Innovation and Trustworthy AI. Students need the ethical reasoning to use those skills responsibly.”"

Wednesday, July 9, 2025

Viewpoint: Don’t let America’s copyright crackdown hand China global AI leadership; Grand Forks Herald, July 5, 2025

Kent Conrad and Saxby Chambliss, Grand Forks Herald; Viewpoint: Don’t let America’s copyright crackdown hand China global AI leadership


[Kip Currier: The assertion by anti-AI regulation proponents, like the former U.S. congressional authors of this think-piece, that requiring AI tech companies to secure permission and pay for AI training data will kill or hobble U.S. AI entrepreneurship is hyperbolic catastrophizing. AI tech companies can license training data from creators who are willing to participate in licensing frameworks. Such frameworks already exist for music copyrights, for example. AI tech companies just don't want to pay for something if they can get it for free.

AI tech companies would never permit users to scrape up, package, and sell their IP content for free. Copyright holders shouldn't be held to a different standard and be required to let tech companies monetize their IP-protected works without permission and compensation.]

[Excerpt]

"If these lawsuits succeed, or if Congress radically rewrites the law, it will become nearly impossible for startups, universities or mid-size firms to develop competitive AI tools."

Tuesday, July 8, 2025

MyPillow CEO’s lawyers fined for AI-generated court filing in Denver defamation case; The Colorado Sun, July 7, 2025

Olivia Prentzel, The Colorado Sun; MyPillow CEO’s lawyers fined for AI-generated court filing in Denver defamation case

"A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell to pay $3,000 each after they used artificial intelligence to prepare a court filing that was riddled with errors, including citations to nonexistent cases and misquotations of case law. 

Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the motion that had contained nearly 30 defective citations, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday."

Monday, July 7, 2025

Welcome to Your Job Interview. Your Interviewer Is A.I.; The New York Times, July 7, 2025

Natallie Rocha, The New York Times; Welcome to Your Job Interview. Your Interviewer Is A.I.

"Job seekers across the country are starting to encounter faceless voices and avatars backed by A.I. in their interviews. These autonomous interviewers are part of a wave of artificial intelligence known as “agentic A.I.,” where A.I. agents are directed to act on their own to generate real-time conversations and build on responses."

Saturday, July 5, 2025

Two Courts Rule On Generative AI and Fair Use — One Gets It Right; Electronic Frontier Foundation (EFF), June 26, 2025

Tori Noble, Electronic Frontier Foundation (EFF); Two Courts Rule On Generative AI and Fair Use — One Gets It Right

 "Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.

In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey."

Tuesday, July 1, 2025

The problems with California’s pending AI copyright legislation; Brookings, June 30, 2025

Brookings; The problems with California’s pending AI copyright legislation

 "California’s pending bill, AB-412, is a well-intentioned but problematic approach to addressing artificial intelligence (AI) and copyright currently moving through the state’s legislature. If enacted into law, it would undermine innovation in generative AI (GenAI) not only in California but also nationally, as it would impose onerous requirements on both in-state and out-of-state developers that make GenAI models available in California. 

The extraordinary capabilities of GenAI are made possible by the use of extremely large sets of training data that often include copyrighted content. AB-412 arose from the very reasonable concerns that rights owners have in understanding when and how their content is being used for building GenAI models. But the bill imposes a set of unduly burdensome and unworkable obligations on GenAI developers. It also favors large rights owners, which will be better equipped than small rights owners to pursue the litigation contemplated by the bill."