Showing posts with label Sam Altman. Show all posts

Thursday, October 3, 2024

What You Need to Know About Grok AI and Your Privacy; Wired, September 10, 2024

Kate O'Flaherty, Wired; What You Need to Know About Grok AI and Your Privacy

"Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election."

Sunday, August 4, 2024

OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid; The Observer via The Guardian, August 3, 2024

Gary Marcus, The Observer via The Guardian; OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid

"Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted. Anthropic – formed from a set of OpenAI refugees who were worried that AI safety wasn’t taken seriously enough – seems increasingly to be competing directly with the mothership, with all that entails. The billion-dollar startup Perplexity seems to be another object lesson in greed, training on data it isn’t supposed to be using. Microsoft, meanwhile, went from advocating “responsible AI” to rushing out products with serious problems, pressuring Google to do the same. Money and power are corrupting AI, much as they corrupted social media.


We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

 

I honestly don’t think we will get to an AI that we can trust if we stay on the current path. Aside from the corrupting influence of power and money, there is a deep technical issue, too: large language models (the core technique of generative AI) invented by Google and made famous by Altman’s company, are unlikely ever to be safe. They are recalcitrant, and opaque by nature – so-called “black boxes” that we can never fully rein in. The statistical techniques that drive them can do some amazing things, like speed up computer programming and create plausible-sounding interactive characters in the style of deceased loved ones or historical figures. But such black boxes have never been reliable, and as such they are a poor basis for AI that we could trust with our lives and our infrastructure.

 

That said, I don’t think we should abandon AI. Making better AI – for medicine, and material science, and climate science, and so on – really could transform the world. Generative AI is unlikely to do the trick, but some future, yet-to-be developed form of AI might.

 

The irony is that the biggest threat to AI today may be the AI companies themselves; their bad behaviour and hyped promises are turning a lot of people off. Many are ready for government to take a stronger hand. According to a June poll by Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate"."

Thursday, July 25, 2024

Who will control the future of AI?; The Washington Post, July 25, 2024

The Washington Post; Who will control the future of AI?

"Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?"

Saturday, June 29, 2024

The Voices of A.I. Are Telling Us a Lot; The New York Times, June 28, 2024

Amanda Hess, The New York Times; The Voices of A.I. Are Telling Us a Lot

"Tech companies advertise their virtual assistants in terms of the services they provide. They can read you the weather report and summon you a taxi; OpenAI promises that its more advanced chatbots will be able to laugh at your jokes and sense shifts in your moods. But they also exist to make us feel more comfortable about the technology itself.

Johansson’s voice functions like a luxe security blanket thrown over the alienating aspects of A.I.-assisted interactions. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.,” Johansson said of Sam Altman, OpenAI’s founder. “He said he felt that my voice would be comforting to people.”

It is not that Johansson’s voice sounds inherently like a robot’s. It’s that developers and filmmakers have designed their robots’ voices to ease the discomfort inherent in robot-human interactions. OpenAI has said that it wanted to cast a chatbot voice that is “approachable” and “warm” and “inspires trust.” Artificial intelligence stands accused of devastating the creative industries, guzzling energy and even threatening human life. Understandably, OpenAI wants a voice that makes people feel at ease using its products. What does artificial intelligence sound like? It sounds like crisis management."

Monday, November 6, 2023

OpenAI offers to pay for ChatGPT customers’ copyright lawsuits; The Guardian, November 6, 2023

The Guardian; OpenAI offers to pay for ChatGPT customers’ copyright lawsuits

"Rather than remove copyrighted material from ChatGPT’s training dataset, the chatbot’s creator is offering to cover its clients’ legal costs for copyright infringement suits.

OpenAI CEO Sam Altman said on Monday: “We can defend our customers and pay the costs incurred if you face legal claims around copyright infringement and this applies both to ChatGPT Enterprise and the API.” The compensation offer, which OpenAI is calling Copyright Shield, applies to users of the business tier, ChatGPT Enterprise, and to developers using ChatGPT’s application programming interface. Users of the free version of ChatGPT or ChatGPT+ were not included.

OpenAI is not the first to offer such legal protection, though as the creator of the wildly popular ChatGPT, which Altman said has 100 million weekly users, it is a heavyweight player in the industry. Google, Microsoft and Amazon have made similar offers to users of their generative AI software. Getty Images, Shutterstock and Adobe have extended similar financial liability protection for their image-making software."