Showing posts with label AI.

Wednesday, January 21, 2026

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss; The Guardian, January 21, 2026

The Guardian; Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss

"Jamie Dimon, the boss of JP Morgan, has said artificial intelligence “may go too fast for society” and cause “civil unrest” unless governments and business support displaced workers.

While advances in AI will have huge benefits, from increasing productivity to curing diseases, the technology may need to be phased in to “save society”, he said...

Jensen Huang, the chief executive of the semiconductor maker Nvidia, whose chips are used to power many AI systems, argued that labour shortages rather than mass layoffs were the threat.

Playing down fears of AI-driven job losses, Huang told the meeting in Davos that “energy’s creating jobs, the chips industry is creating jobs, the infrastructure layer is creating jobs … jobs, jobs, jobs”...

Huang also argued that AI robotics was a “once-in-a-generation” opportunity for Europe, as the region had an “incredibly strong” industrial manufacturing base."

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Tuesday, January 20, 2026

FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026

 National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape

"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.

The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.

Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.

To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."

AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026

 Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up

 "As 2025 began, I thought humanity's biggest problem was climate change.

In 2026, AI is more pressing...

Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.

Meanwhile, photographic and video evidence in courts will presumably become useless if they can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.

AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...

But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."

Sunday, January 18, 2026

Matthew McConaughey Trademarks ‘Alright, Alright, Alright!’ and Other IP as Legal Protections Against ‘AI Misuse’; Variety, January 14, 2026

Todd Spangler, Variety; Matthew McConaughey Trademarks ‘Alright, Alright, Alright!’ and Other IP as Legal Protections Against ‘AI Misuse’

"Matthew McConaughey’s lawyers want you to know that using AI to replicate the actor’s famous catchphrase is not “alright, alright, alright.”

Attorneys for entertainment law firm Yorn Levine representing McConaughey have secured eight trademarks from the U.S. Patent and Trademark Office over the last several months for their client, which they said is aimed at protecting his voice and likeness from unauthorized AI misuse."

Saturday, January 17, 2026

Public Shame Is the Most Effective Tool for Battling Big Tech; The New York Times, January 14, 2026

The New York Times; Public Shame Is the Most Effective Tool for Battling Big Tech

"It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope...

Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable...

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level."

Library offering two hybrid workshops on AI issues; University of Pittsburgh, University Times, January 16, 2026

 University of Pittsburgh, University Times; Library offering two hybrid workshops on AI issues

"Next week the University Library System will host two hybrid AI workshops, which are open to all faculty, staff and students.

Both workshops will be held in Hillman Library’s K. Leroy Irvis Reading Room and will be available online.

Navigating Pitt's AI Resources for Research & Learning: 4-5 p.m. Jan. 21. In this workshop, participants will learn about all the AI tools available to the Pitt community and what their strengths are when it comes to research and learning. The workshop will focus on identifying the appropriate AI tools, describing their strengths and weaknesses for specific learning needs, and developing a plan for using the tools effectively. Register here.

Creating a Personal Research & Learning Assistant: Writing Effective Prompts: 4-5 p.m. Jan. 22. Anyone can use an AI tool, but maximizing its potential for personalized learning takes some skills and forethought. If you have been using Claude or Gemini to support your research or learning and are interested in getting better results faster, this workshop is for you. Attend this session to learn strategies to write effective prompts which will help you both ideate on your topic of interest and increase the likelihood of generating useful responses. We will explore numerous frameworks for crafting prompts, including making use of personas, context, and references. Register here."

Friday, January 16, 2026

Microsoft Shuts Down Library, Replaces It With AI; Futurism, January 16, 2026

Futurism; Microsoft Shuts Down Library, Replaces It With AI

"Does Microsoft hate books more, or its own workers? It’s hard to say, because The Verge reports that the multitrillion dollar giant is gutting its employee library and cutting down on digital subscriptions in favor of pursuing what’s internally described as an “AI-powered learning experience” — whatever in Clippy’s name that’s supposed to mean."

Microsoft is closing its employee library and cutting back on subscriptions; The Verge, January 15, 2026

Tom Warren, The Verge; Microsoft is closing its employee library and cutting back on subscriptions

"Microsoft is closing its physical library of books and cutting employee subscriptions. It's part of cost cutting and a move to AI."

Wednesday, January 14, 2026

Britain seeks 'reset' in copyright battle between AI and creators; Reuters, January 13, 2026

 Reuters; Britain seeks 'reset' in copyright battle between AI and creators

"British technology minister Liz Kendall said on Tuesday the government was seeking a "reset" on plans to overhaul copyright rules to accommodate artificial intelligence, pledging to protect creators while unlocking AI's economic potential.

Creative industries worldwide are grappling with legal and ethical challenges posed by AI systems that generate original content after being trained on popular works, often without compensating the original creators."

Saturday, January 3, 2026

University of Rochester's incoming head librarian looks to adapt to AI; WXXI, January 2, 2026

Noelle E. C. Evans, WXXI; University of Rochester's incoming head librarian looks to adapt to AI

"A new head librarian at the University of Rochester is preparing to take on a growing challenge — adapting to generative artificial intelligence.

Tim McGeary takes on the position of university librarian and dean of libraries on March 1. He is currently associate librarian for digital strategies and technology at Duke University, where he’s witnessed AI challenges firsthand...

“(The university’s digital repository) was dealing with an unforeseen consequence of its own success: By making (university) research freely available to anyone, it had actually made it less accessible to everyone,” Jamie Washington wrote for the campus online news source, UDaily.

That balance between open access and protecting students, researchers and publishers from potential harms from AI is a space of major disruption, McGeary said.

"If they're doing this to us, we have open systems, what are they possibly doing to those partners we have in the publishing space?" McGeary asked. "We've already seen some of the larger AI companies have to be in court because they have acquired content in ways that are not legal.”

In the past 25 years, he said he’s seen how university libraries have evolved with changing technology; they've had to reinvent how they serve research and scholarship. So in a way, this is another iteration of those challenges, he said."

Tuesday, December 30, 2025

AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer; The Guardian, December 30, 2025

The Guardian; AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer

"A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.

Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.

Bengio, chair of a leading international AI safety study, said the growing perception that chatbots were becoming conscious was “going to drive bad decisions”.

The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans."

An Anti-A.I. Movement Is Coming. Which Party Will Lead It?; The New York Times, December 29, 2025

Michelle Goldberg, The New York Times; An Anti-A.I. Movement Is Coming. Which Party Will Lead It?

"I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”

A.I. obviously has beneficial uses, especially medical ones; it may, for example, be better than humans at identifying localized cancers from medical imagery. But the list of things it is ruining is long."

What country stars really think about that AI-generated country ‘hit’; The Washington Post, December 28, 2025

The Washington Post; What country stars really think about that AI-generated country ‘hit’

"“Walk My Walk,” a track from an act called Breaking Rust, landed at No. 1 on the magazine’s country digital song sales list. It didn’t take long for journalists to realize that Breaking Rust didn’t appear to be human; Billboard referred to it as a “AI-powered country act,” and one of several “AI artists” on its charts.

“Can listeners tell the difference?” CNN wondered, taking the question to people on the street. “Does it matter?”

It’s an issue that has been roiling the music industry lately, even after years of media consolidation and format changes that had already made it harder for real singers and songwriters to earn a living.

The alarm bells grew louder in 2025 as artificial intelligence became more pervasive, but the Breaking Rust episode was a particular focal point for Nashville anxieties; Tennessee was the first state to sign into law the Elvis Act, which protects singers from their voices being copied by AI."

Monday, December 29, 2025

Year in Review: The U.S. Copyright Office; Library of Congress Blogs: Copyright Creativity at Work, December 29, 2025

George Thuronyi, Library of Congress Blogs: Copyright Creativity at Work; Year in Review: The U.S. Copyright Office

"Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office

As the year draws to a close, I am pleased to recognize an impressive slate of accomplishments at the U.S. Copyright Office. Despite some challenges, including a lengthy government shutdown, the Office continued to produce high-quality work and reliable service to the public—from policy analyses to technology updates; efficient registration, recordation and deposit; and education and outreach. I am grateful for the opportunity to lead such a skilled and dedicated staff.

A central policy focus of the year was further work on the Office’s comprehensive artificial intelligence initiative. In January, we published Part 2 of our report, Copyright and Artificial Intelligence, addressing the copyrightability of works generated using AI. In May, we released a pre-publication version of Part 3, addressing the ingestion of copyrighted works for generative AI training.

A particularly exciting development has been in the area of IT modernization: the launch of more components of the Enterprise Copyright System (ECS). The Office engaged in a successful limited pilot with members of the public of both the eDeposit upload functionality and our most-used registration form, the Standard Application. The development teams are implementing the feedback received, and work has begun on the first ECS group registration application. We also launched the ECS licensing component, which improves the Office’s internal capabilities in administering section 111 of the Copyright Act.

Another ECS component, the new and improved Copyright Public Records System (CPRS), replaced our legacy system as the official Office record in June. More and more pre-1978 historical public records have been digitized and published, with 19,135 copyright record books now available online, amounting to more than 72 percent of the total collection.

The Office also made strides in administration and public service. Our small claims court, the Copyright Claims Board (CCB), completed its third full year, offering a more accessible option for resolving copyright disputes below a certain monetary value. The Office published a rule expediting the process for obtaining a certification of a final determination and initiated a study of the CCB’s operations to be delivered to Congress in 2026.

Our public information and education programs continued to grow. The Office hosted or participated in 190 events and speaking engagements and assisted the public, in both English and Spanish, with responses to 247,484 inquiries in-person and by phone, email, and other communications. We launched a new Registration Toolkit and a Copyright for Kids activity sheet. In September, the Office hosted the International Copyright Institute, our premier weeklong training event for foreign copyright officials, coproduced with the World Intellectual Property Organization (WIPO).

This fall, we responded to a Congressional request on issues relating to performance rights organizations (PROs). And earlier in December, we announced a new group registration option for two-dimensional artwork, responding to the needs of visual artists. The Office also has taken forward the periodic review, mandated by the Music Modernization Act, of the mechanical licensing collective (MLC) and digital licensee coordinator (DLC), to be completed in 2026.

On the litigation front, the Office worked with the Department of Justice to develop and articulate positions in copyright-related cases. One major win was an appellate decision affirming the Office’s rejection of an application to register a work claimed to be produced entirely by artificial intelligence. The D.C. Circuit agreed with our view that human authorship is required for copyright protection.

Collaboration with and advising other federal agencies was again a key part of our interagency work in the international arena. This included participating in WIPO meetings on copyright and contributing to the U.S. Trade Representative’s annual Special 301 Report.

Concurrent with all of this activity, the Office’s provision of our regular services continued apace. Despite furloughs during the six-week lapse in appropriations, we issued 415,780 registrations and recorded 12,310 documents containing 5,704,306 works in fiscal year 2025. All the while, we maintained historically low processing times. We also received and transferred 503,389 copyright deposits, worth more than $57.8 million, to Library of Congress collections.

The Copyright Office remains committed to advancing copyright law and policy and supporting stakeholders in the creation and use of works of authorship. The work of the past year demonstrates the value of a resilient institution, grounded in expertise and public service. We look forward to further achievements in 2026."

Sunday, December 28, 2025

8 Ways A.I. Affected Pop Culture in 2025; The New York Times, December 28, 2025

The New York Times; 8 Ways A.I. Affected Pop Culture in 2025

"A.I.-generated artists topping iTunes and Billboard charts. Podcast hosts speaking fluently for hours in languages they do not know. Dead celebrities brought back to life and filling up social-media feeds.

For years, artificial intelligence was a disruption on the horizon. In 2025 it arrived in tangible ways, big and small. Here are a few examples of how A.I. intersected with pop culture in 2025."

A 1 Percent Solution to the Looming A.I. Job Apocalypse; The New York Times, December 27, 2025

Sal Khan, The New York Times; A 1 Percent Solution to the Looming A.I. Job Apocalypse

"On my way to meet a friend in Silicon Valley a few weeks ago, I passed three self-driving Waymos gliding through traffic. These cars are everywhere now, moving as if they’ve been part of the landscape forever. When I arrived, the wonder of those futuristic cars gave way to a far more troubling glimpse of what lies ahead.

My friend told me that a huge call center in the Philippines — a center his venture capital firm had invested in — had just deployed A.I. agents capable of replacing 80 percent of its work force. The tone in his voice wasn’t triumphant. It was filled with deep discomfort. He knew that thousands of workers depended on those jobs to pay for food, rent and medicine. But they were disappearing overnight. Even worse, over the next few years this could happen across the entire Filipino call center industry, which directly makes up 7 percent to 10 percent of the nation’s G.D.P.

That conversation stayed with me. What’s happening in the Philippines is connected to what’s happening on the streets of San Francisco; Phoenix; Austin, Texas; Atlanta; and Los Angeles — the cities where driverless cars now operate.

I believe artificial intelligence will displace workers at a scale many people don’t yet realize."

Could AI relationships actually be good for us?; The Guardian, December 28, 2025

Justin Gregg, The Guardian; Could AI relationships actually be good for us?

"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity."

"Why your AI companion is not your friend"; Financial Times, December 27, 2025

Financial Times; "Why your AI companion is not your friend"

The year in AI and culture; NPR, December 28, 2025

"From the advent of AI actress Tilly Norwood to major music labels making deals with AI companies, 2025 has been a watershed year for AI and culture."