
Saturday, October 11, 2025

AI videos of dead celebrities are horrifying many of their families; The Washington Post, October 11, 2025

The Washington Post; AI videos of dead celebrities are horrifying many of their families


[Kip Currier: OpenAI CEO Sam Altman's reckless actions in releasing Sora 2.0 without guardrails and accountability mechanisms exemplify Big Tech's ongoing Zuckerberg-ian "Move Fast and Break Things" modus operandi in the AI Age. 

Altman also recently had to walk back (yet again!) his ill-conceived directive that copyright holders would need to opt out of having their copyrighted works used as AI training data, rather than the burden being on OpenAI to secure their opt-ins through licensing.

To learn more about potential further copyright-related questionable conduct by OpenAI, read this 10/10/25 Bloomberg Law article: OpenAI Risks Billions as Court Weighs Privilege in Copyright Row]

[Excerpt]

"OpenAI said the text-to-video tool would depict real people only with their consent. But it exempted “historical figures” from these limits during its launch last week, allowing anyone to make fake videos resurrecting public figures, including activists, celebrities and political leaders — and leaving some of their relatives horrified.

“It is deeply disrespectful and hurtful to see my father’s image used in such a cavalier and insensitive manner when he dedicated his life to truth,” Shabazz, whose father was assassinated in front of her in 1965 when she was 2, told The Washington Post. She questioned why the developers were not acting “with the same morality, conscience, and care … that they’d want for their own families.”

Sora’s videos have sparked agitation and disgust from many of the depicted celebrities’ loved ones, including actor Robin Williams’s daughter, Zelda Williams, who pleaded in an Instagram post recently for people to “stop sending me AI videos of dad.”"

OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025

Futurism; OpenAI’s Sora Is in Serious Trouble

"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statementurging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.

Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.

As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.

“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.

Others accused OpenAI of abusing copyright to hype up its new app...

How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."

Friday, October 10, 2025

You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out; Gizmodo, October 8, 2025

Gizmodo; You Can’t Use Copyrighted Characters in OpenAI’s Sora Anymore and People Are Freaking Out

 "OpenAI may be able to appease copyright holders by shifting its Sora policies, but it’s now pissed off its users. As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can’t make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was “the only reason this app was so fun.” Another claimed, “Moral policing and leftist ideology are destroying America’s AI industry.” So, you know, it seems like they’re handling this well."

It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?; The Guardian, October 10, 2025

The Guardian; It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?

"I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”. Consider Altman’s recent political journey. “To anyone familiar with the history of Germany in the 1930s,” Sam warned in 2016, “it’s chilling to watch Trump in action.” He seems to have got over this in time to attend Donald Trump’s second inauguration, presumably because – if we have to extend his artless and predictable analogy – he’s now one of the industrialists welcome in the chancellery to carve up the spoils. “Thank you for being such a pro-business, pro-innovation president,” Sam simpered to Trump at a recent White House dinner for tech titans. “It’s a very refreshing change.” Inevitably, the Trump administration has refused to bring forward any AI regulation at all.

Meanwhile, please remember something Sam and his ironicidal maniacs said earlier this year, when it was suggested that the Chinese AI chatbot DeepSeek might have been trained on some of OpenAI’s work. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more,” his firm’s anguished statement ran. “We take aggressive, proactive countermeasures to protect our technology.” Hilariously, it seemed that the last entity on earth with the power to fight AI theft was OpenAI."

Wednesday, October 8, 2025

OpenAI wasn’t expecting Sora’s copyright drama; The Verge, October 8, 2025

Hayden Field, The Verge; OpenAI wasn’t expecting Sora’s copyright drama

"When OpenAI released its new AI-generated video app Sora last week, it launched with an opt-out policy for copyright holders — media companies would need to expressly indicate they didn’t want their AI-generated characters running rampant on the app. But after days of Nazi SpongeBob, criminal Pikachu, and Sora-philosophizing Rick and Morty, OpenAI CEO Sam Altman announced the company would reverse course and “let rightsholders decide how to proceed.”

In response to a question about why OpenAI changed its policy, Altman said that it came from speaking with stakeholders and suggested he hadn’t expected the outcry.

“I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses,” Altman said. “It felt more different to images than people expected.”"

Sunday, October 5, 2025

OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'; PC Gamer, October 5, 2025

PC Gamer; OpenAI hastily retreats from gung-ho copyright policy after embarrassing Sora video output like AI Sam Altman surrounded by Pokémon saying 'I hope Nintendo doesn't sue us'

"This video is just one of many examples, but you'll have a much harder time finding Sora-generated videos containing Marvel or Disney characters. As reported by Automaton, Sora appears to be refusing prompts containing references to American IP, but Japanese IP didn't seem to be getting the same treatment over the past week.

Japanese lawyer and House of Representatives member Akihisa Shiozaki called for action to protect creatives in a post on X (formerly Twitter), which has been translated by Automaton: "I’ve tried out [Sora 2] myself, but I felt that it poses a serious legal and political problem. We need to take immediate action if we want to protect leading Japanese creators and the domestic content industry, and help them further develop. (I wonder why Disney and Marvel characters can’t be displayed).""

Saturday, October 4, 2025

Sam Altman says Sora will add ‘granular,’ opt-in copyright controls; TechCrunch, October 4, 2025

Anthony Ha, TechCrunch; Sam Altman says Sora will add ‘granular,’ opt-in copyright controls

"OpenAI may be reversing course on how it approaches copyright and intellectual property in its new video app Sora.

Prior to Sora’s launch this week, The Wall Street Journal reported that OpenAI had been telling Hollywood studios and agencies that they needed to explicitly opt out if they didn’t want their IP to be included in Sora-generated videos.

Despite being invite-only, the app quickly climbed to the top of the App Store charts. Sora’s most distinctive feature may be its “cameos,” where users can upload their biometric data to see their digital likeness featured in AI-generated videos.

At the same time, users also seem to delight in flouting copyright laws by creating videos with popular, studio-owned characters. In some cases, those characters might even criticize the company’s approach to copyright, for example in videos where Pikachu and SpongeBob interact with deepfakes of OpenAI CEO Sam Altman.

In a blog post published Friday, Altman said the company is already planning two changes to Sora, first by giving copyright holders “more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.”"

Sunday, December 8, 2024

Google CEO: AI development is finally slowing down—‘the low-hanging fruit is gone’; CNBC, December 8, 2024

Megan Sauer, CNBC; Google CEO: AI development is finally slowing down—‘the low-hanging fruit is gone’

"Now, with the industry’s competitive landscape somewhat established — multiple big tech companies, including Google, have competing models — it’ll take time for another technological breakthrough to shock the AI industry into hyper-speed development again, Pichai said at the New York Times’ DealBook Summit last week.

“I think the progress is going to get harder. When I look at [2025], the low-hanging fruit is gone,” said Pichai, adding: “The hill is steeper ... You’re definitely going to need deeper breakthroughs as we get to the next stage.”...

Some tech CEOs, like Microsoft’s Satya Nadella, agree with Pichai. “Seventy years of the Industrial Revolution, there wasn’t much industry growth, and then it took off ... it’s never going to be linear,” Nadella said at the Fast Company Innovation Festival 2024 in October.

Others disagree, at least publicly. OpenAI CEO Sam Altman, for example, posted “there is no wall” on social media platform X in November — a response to reports that the recently released ChatGPT-4 was only moderately better than previous models."

Thursday, October 3, 2024

What You Need to Know About Grok AI and Your Privacy; Wired, September 10, 2024

Kate O'Flaherty, Wired; What You Need to Know About Grok AI and Your Privacy

"Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election."

Sunday, August 4, 2024

OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid; The Observer via The Guardian, August 3, 2024

Gary Marcus, The Observer via The Guardian; OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid

"Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted. Anthropic – formed from a set of OpenAI refugees who were worried that AI safety wasn’t taken seriously enough – seems increasingly to be competing directly with the mothership, with all that entails. The billion-dollar startup Perplexity seems to be another object lesson in greed, training on data it isn’t supposed to be using. Microsoft, meanwhile, went from advocating “responsible AI” to rushing out products with serious problems, pressuring Google to do the same. Money and power are corrupting AI, much as they corrupted social media.


We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

 

honestly don’t think we will get to an AI that we can trust if we stay on the current path. Aside from the corrupting influence of power and money, there is a deep technical issue, too: large language models (the core technique of generative AI) invented by Google and made famous by Altman’s company, are unlikely ever to be safe. They are recalcitrant, and opaque by nature – so-called “black boxes” that we can never fully rein in. The statistical techniques that drive them can do some amazing things, like speed up computer programming and create plausible-sounding interactive characters in the style of deceased loved ones or historical figures. But such black boxes have never been reliable, and as such they are a poor basis for AI that we could trust with our lives and our infrastructure.

 

That said, I don’t think we should abandon AI. Making better AI – for medicine, and material science, and climate science, and so on – really could transform the world. Generative AI is unlikely to do the trick, but some future, yet-to-be developed form of AI might.

 

The irony is that the biggest threat to AI today may be the AI companies themselves; their bad behaviour and hyped promises are turning a lot of people off. Many are ready for government to take a stronger hand. According to a June poll by Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate"."

Thursday, July 25, 2024

Who will control the future of AI?; The Washington Post, July 25, 2024

The Washington Post; Who will control the future of AI?

"Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?"

Saturday, June 29, 2024

The Voices of A.I. Are Telling Us a Lot; The New York Times, June 28, 2024

Amanda Hess, The New York Times; The Voices of A.I. Are Telling Us a Lot

"Tech companies advertise their virtual assistants in terms of the services they provide. They can read you the weather report and summon you a taxi; OpenAI promises that its more advanced chatbots will be able to laugh at your jokes and sense shifts in your moods. But they also exist to make us feel more comfortable about the technology itself.

Johansson’s voice functions like a luxe security blanket thrown over the alienating aspects of A.I.-assisted interactions. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.,” Johansson said of Sam Altman, OpenAI’s founder. “He said he felt that my voice would be comforting to people.”

It is not that Johansson’s voice sounds inherently like a robot’s. It’s that developers and filmmakers have designed their robots’ voices to ease the discomfort inherent in robot-human interactions. OpenAI has said that it wanted to cast a chatbot voice that is “approachable” and “warm” and “inspires trust.” Artificial intelligence stands accused of devastating the creative industries, guzzling energy and even threatening human life. Understandably, OpenAI wants a voice that makes people feel at ease using its products. What does artificial intelligence sound like? It sounds like crisis management."

Monday, November 6, 2023

OpenAI offers to pay for ChatGPT customers’ copyright lawsuits; The Guardian, November 6, 2023

The Guardian; OpenAI offers to pay for ChatGPT customers’ copyright lawsuits

"Rather than remove copyrighted material from ChatGPT’s training dataset, the chatbot’s creator is offering to cover its clients’ legal costs for copyright infringement suits.

OpenAI CEO Sam Altman said on Monday: “We can defend our customers and pay the costs incurred if you face legal claims around copyright infringement and this applies both to ChatGPT Enterprise and the API.” The compensation offer, which OpenAI is calling Copyright Shield, applies to users of the business tier, ChatGPT Enterprise, and to developers using ChatGPT’s application programming interface. Users of the free version of ChatGPT or ChatGPT+ were not included.

OpenAI is not the first to offer such legal protection, though as the creator of the wildly popular ChatGPT, which Altman said has 100 million weekly users, it is a heavyweight player in the industry. Google, Microsoft and Amazon have made similar offers to users of their generative AI software. Getty Images, Shutterstock and Adobe have extended similar financial liability protection for their image-making software."