Showing posts with label OpenAI. Show all posts

Monday, April 27, 2026

Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics; The Christian Science Monitor, April 27, 2026

The Christian Science Monitor; Musk’s lawsuit against OpenAI seen as a ‘test case’ for AI ethics

"A dispute between ChatGPT’s parent company, OpenAI, and one of the company’s founders – billionaire and tech entrepreneur Elon Musk – will play out in a federal court in Oakland, California, beginning April 27. 

Mr. Musk, who left the company in 2018, is suing OpenAI, claiming its leaders manipulated him into thinking he was contributing money to a nonprofit. He wants the company returned to its nonprofit status and seeks monetary compensation. 

OpenAI says Mr. Musk, who has since raised billions through the launch of his own for-profit company xAI, is misrepresenting facts to gain a competitive edge."

Saturday, April 25, 2026

'Too Dangerous to Release' Is Becoming AI's New Normal; Time, April 24, 2026

Nikita Ostrovsky, Time; 'Too Dangerous to Release' Is Becoming AI's New Normal

 "On April 16, OpenAI announced GPT-Rosalind, a new AI model targeted at the life sciences. It significantly outperforms their current publicly available models in chemistry and biology tasks, as well as experimental design. As with Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber, also released this month, the model is not available to the general public—reserved, at least initially, for “qualified customers” through a “trusted access program.” 

The releases signal a new and concerning trend of AI companies deeming their most capable models too powerful to entrust to the general public. “I think frontier developers are restricting access to their most capable models because they are genuinely worried about some of the capabilities these models have,” says Peter Wildeford, head of policy at the AI Policy Network, an advocacy group. 

It is unclear why OpenAI decided to restrict access to GPT-Rosalind in particular. An OpenAI spokesperson said in an email that giving access to trusted partners allows the company to “make more capable systems available sooner to verified users, while still managing risk thoughtfully.”

Who decides? 

The rapid advance of AI capabilities raises the question of whether private companies should be making the increasingly weighty decisions about whether and how potentially dangerous AI models should be built, and who should be allowed to use them."

OpenAI's Sam Altman writes apology to community of Tumbler Ridge; CBC News, April 24, 2026

Andrew Kurjata, CBC News; OpenAI's Sam Altman writes apology to community of Tumbler Ridge

"Sam Altman, the CEO of OpenAI, has written a letter of apology to the community of Tumbler Ridge for failing to alert RCMP about the account of the Tumbler Ridge shooter.

The company shared the letter with the local news website Tumbler RidgeLines, which published it in full. Its authenticity was confirmed by a spokesperson for OpenAI...

Altman committed to authoring an apology after meeting with B.C. Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka at the beginning of March, but said he wanted to take some time before doing so in order to give the community the opportunity to "grieve in their own time."

He also acknowledged that his company should have alerted law enforcement about the account of the shooter, which was flagged for problematic activity in advance of the tragedy but was not escalated to alerting authorities in Canada...

Altman's company is being sued by one Tumbler Ridge family, who alleges the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

Apology 'necessary' but 'grossly insufficient': Eby

Eby also shared the letter on social media, writing "the apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."

A statement from the District of Tumbler Ridge released Friday afternoon acknowledged that Altman's letter "may evoke a range of emotions, and we encourage everyone to take the time and space they need.""

Friday, April 24, 2026

Sam Altman Wants to Know Whether You’re Human; The Atlantic, April 24, 2026

Will Gottsegen, The Atlantic; Sam Altman Wants to Know Whether You’re Human

And he has a way to prove it.

"As the CEO of OpenAI and the chairman of Tools for Humanity, Altman has a financial interest both in the products that create these dangers and in the ones that guard against them."

DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.; The New York Times, April 24, 2026

Meaghan Tobin, The New York Times; DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I.

"DeepSeek released its models as open source, which means others can freely use and modify them. By contrast, OpenAI and Anthropic kept their leading models proprietary. The episode demonstrated that an open-source system could perform almost as well as closed versions. In the months that followed, Chinese firms released dozens of other open-source models. By the end of 2025, these models made up a significant share of global A.I. usage.

On Friday, DeepSeek released a preview of V4, its long-awaited follow-up model, which it intends to open source. The new model excels at writing computer code, an increasingly important skill for leading A.I. systems. It significantly outperformed every other open-source system at generating code, according to tests from Vals AI, a company that tracks the performance of A.I. technologies.

DeepSeek released its new model just days after Moonshot AI, another Chinese start-up, introduced its latest open-source model, Kimi 2.6. While these systems trail the coding capabilities of the leading U.S. models from Anthropic and OpenAI, the gap is narrowing.

The implications are meaningful. Using A.I. to write code is faster and frees up human programmers to focus on bigger issues. It also means people can use DeepSeek’s latest release to power A.I. agents, which are personal digital assistants that can use other software applications on behalf of office workers, including spreadsheets, online calendars and email services."

Wednesday, April 22, 2026

When AI advice enters a murder case; Politico, April 22, 2026

Aaron Man, Politico; When AI advice enters a murder case

"Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI on Tuesday following a mass shooting at Florida State University that resulted in two deaths last year. The attorney general stated during a press conference that ChatGPT “offered significant advice” to the suspected gunman, Phoenix Ikner, based on a preliminary review by prosecutors.

“If this were a person on the other end of the screen, we would be charging them with murder,” Uthmeier said.

The prospect of OpenAI facing criminal liability raises new questions about whether developers should be held responsible for a chatbot’s potential role in such a tragedy.

Legal scholars told DFD that, compared with prior civil cases, imposing criminal liability on the company would be a much steeper uphill battle. A key challenge, according to them, would be proving OpenAI acted with criminal intent."

Friday, April 17, 2026

AI Startups Have These Copyright Lawyers on Speed Dial; Bloomberg Law, April 16, 2026

David Schultz, Bloomberg Law; AI Startups Have These Copyright Lawyers on Speed Dial

"Something similar connects many of the top attorneys representing the artificial intelligence industry in its most consequential battles: their resumes.

The common thread is Durie Tangri. More than 50 attorneys from the defunct Bay Area intellectual property firm are at the center of epic Silicon Valley copyright fights, just more than three years after Morrison Foerster acquired the practice...

“Tech copyright is a small world,” said Joseph Gratz, one of the alums at Morrison.

The Durie Tangri alums have benefited from the demand in tech copyright law, said Gratz, who has appeared in court defending OpenAI in almost two dozen federal lawsuits...

One of the marquee cases Durie Tangri took on was the decade-long copyright infringement suit over Google’s book digitization. Sonal Mehta, a Durie Tangri alum who is now at WilmerHale, said the boutique relished taking on matters that ventured into uncharted territory.

“We weren’t afraid to be operating in gray areas or to be looking at where the law hadn’t fully developed,” Mehta said. “We didn’t need to feel like every argument had to be something that was a cookie cutter argument that had already been made and won 20 times before.”"

Tuesday, April 14, 2026

Sam Altman home attacks spark concern over AI-motivated violence; Axios, April 14, 2026

Nadia Lopez, Axios; Sam Altman home attacks spark concern over AI-motivated violence

"The big picture: These incidents come amid heightened tension around AI's rapid development, with public anxiety over its political and economic implications rising even as companies continue to push the technology forward.

Threat level: AI is being cast in increasingly existential terms, including by its own creators. Warnings over the chaos the technology could unleash have become part of mainstream discourse, alongside promises of sweeping economic transformation.

This dual promise of disruption and progress has helped elevate AI into one of the most consequential policy debates in the world, but also one of the most emotionally charged."

Agency in the Age of AI; Time, April 14, 2026

John Palfrey, Time; Agency in the Age of AI

"OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent designed to run locally on a user’s computer, is a sign that AI agents are quickly being given more responsibilities and more access—from emails to bank accounts, a decision with unintended consequences, including deleted inboxes and Amazon Web Services outages. Peter Steinberger, the founder of OpenClaw, said he wants to “build an agent that even my mum can use.” But there is a difference between using technology to improve efficiency and giving technology agency that humans should hold. 

These developments prompt hard questions, particularly for young people who are seeking agency in their personal and professional lives. Does it make sense to train to be an actuary if AI is supposed to be good at predicting unknown outcomes based on data? Is it worth the cost today to train to be a lawyer or an accountant or pursue higher education at all when all the answers are supposedly at our fingertips? Put another way, what does agency look like in an era dominated by the spread of AI?"

Monday, April 13, 2026

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

Queenie Wong, Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating he hopes it might convince people not to repeat the attack despite their opinions on him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears; The Guardian, April 8, 2026

The Guardian; It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears

"I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.

Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.

The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman."

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

Maxwell Zeff, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Thursday, April 9, 2026

Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’; New York Daily News via Chicago Tribune, April 9, 2026

New York Daily News via Chicago Tribune; Judge slams key OpenAI witness in copyright infringement case for ‘hazy recollections’

"An unimpressed Manhattan judge ordered a corporate representative for OpenAI to undergo a second deposition after finding he failed to answer “even the simplest questions” the first time around about what the company has described as efforts to limit chatbots from stealing writers’ work.

Magistrate Judge Ona Wang, in a sharply worded 11-page order Tuesday, said OpenAI had been put on notice that the company’s purported expert on plagiarism, John Vincent “Vinnie” Monaco, was woefully underprepared for his January deposition, ordering him to submit to 3.5 more hours of questioning that took place Wednesday.

In granting a motion from the Chicago Tribune, New York Times and other news outlets suing OpenAI to compel the additional testimony, Wang deferred ruling on a request for sanctions, saying it would depend on how Monaco fared in his do-over. She said she may issue fines or recommend some of his answers be deemed admissions.

OpenAI has previously said that Monaco has more knowledge than any of its engineers about Project Giraffe, an internal operation which the company claims is designed to develop ways to limit its large language models, or LLMs, from inadvertently regurgitating copyrighted works — the issue at the core of the ongoing Manhattan Federal Court lawsuit."

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Saturday, April 4, 2026

AI agents are scrambling power users' brains; Axios, April 4, 2026

Megan Morrone, Axios; AI agents are scrambling power users' brains

"A growing number of software developers say AI coding tools are frying their brains. 

The big picture: The most popular agentic AI systems have triggered something that looks a lot like addiction among some of tech's highest performers."

Friday, March 27, 2026

OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens; Futurism, March 26, 2026

Futurism; OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens

"The company’s panicked executives have made it abundantly clear that distracting “side quests” must be abandoned, while doubling down on both enterprise and coding. The purported goal is to stuff all of its offerings into a single “super app,” taking a page out of xAI CEO Elon Musk’s playbook.

These aren’t empty words by OpenAI execs. First, news emerged this week that the company is killing its disastrous Sora video AI slop app, lighting what was supposed to be a groundbreaking $1 billion deal with Disney on fire.

Now, the company is axing its spicy “adult mode” chatbot, as the Financial Times reports, once again highlighting how much pressure the company is under as competitors aren’t just catching up, but snatching up precious paying customers from right under its nose."

Thursday, March 26, 2026

OpenAI shutters AI video generator Sora in abrupt announcement; The Guardian, March 24, 2026

The Guardian; OpenAI shutters AI video generator Sora in abrupt announcement

Tech firm ‘says goodbye’ to Sora, made publicly available in 2024, just six months after its launch of a stand-alone app

"In an abrupt announcement on Tuesday, OpenAI said it was “saying goodbye” to its AI video generator Sora. The move comes just six months after the company’s splashy launch of a stand-alone app with which people could make and share hyper-realistic AI videos in a scrolling social feed."

Saturday, March 21, 2026

The dictionaries are suing OpenAI for ‘massive’ copyright infringement, and say ChatGPT is starving publishers of revenue; Fortune, March 21, 2026

Fortune; The dictionaries are suing OpenAI for ‘massive’ copyright infringement, and say ChatGPT is starving publishers of revenue

"In a filing submitted to the Southern District of New York, the companies accuse OpenAI of cannibalizing the traffic and ad revenue that publishers depend on to survive. “ChatGPT starves web publishers, like [the] Plaintiffs, of revenue,” the complaint reads. Where a traditional search engine sends users to a publisher’s website, Britannica and Merriam-Webster allege ChatGPT instead absorbs the content and delivers a polished answer. It also alleges the AI company fed its LLM with researched and fact-checked work of the companies’ hundreds of human writers and editors...

In an apt example, the complaint describes a prompt asking “How does Merriam-Webster define plagiarize?” to which the model reportedly responded with a definition identical to the one found in the Merriam-Webster dictionary. The complaint adds that the dictionary has been registered with the U.S. Copyright Office."