Showing posts with label Google. Show all posts

Tuesday, May 5, 2026

Canadian fiddler sues Google after AI Overview wrongly claimed he was a sex offender; The Guardian, May 4, 2026

The Guardian; Canadian fiddler sues Google after AI Overview wrongly claimed he was a sex offender

"An acclaimed Canadian fiddle player has launched a $1.5m civil lawsuit against Google, alleging that the online giant defamed him by falsely identifying him as a sex offender in an AI-generated summary of his life and career.

Ashley MacIsaac, a three-time Juno award-winning musician, filed the claim in the Ontario superior court of justice, asserting that Google was liable for the “foreseeable republication” of its AI-generated Overview feature, which previously published defamatory claims that he had been convicted of multiple criminal offences, including the sexual assault of a woman, internet luring involving a child with the intention of sexually assaulting the child, and assault causing bodily harm.

Google’s AI Overview also wrongly stated that MacIsaac had been listed on the national sex offender registry for life, the lawsuit says."

Friday, May 1, 2026

Pentagon Makes Deals With A.I. Companies to Expand Classified Work; The New York Times, May 1, 2026

Julian E. Barnes, The New York Times; Pentagon Makes Deals With A.I. Companies to Expand Classified Work

"The Pentagon announced on Friday that it had reached deals with some of the technology industry’s biggest companies in an effort to expand the military’s artificial intelligence capabilities and increase the number of firms authorized to be on classified networks.

The companies, according to the Defense Department, agreed to allow the Pentagon to employ their technology for “any lawful use,” a standard resisted by Anthropic, which was initially the only artificial intelligence model available on classified markets.

The Pentagon had previously confirmed deals with Elon Musk’s xAI, OpenAI and Google. In addition the Pentagon said it had reached deals with Amazon Web Services, Microsoft, Nvidia and Reflection AI, a start-up."

Tuesday, April 21, 2026

YouTube Opens Up AI Deepfake Detection Tool to All of Hollywood (Exclusive); The Hollywood Reporter, April 21, 2026

Alex Weprin, The Hollywood Reporter; YouTube Opens Up AI Deepfake Detection Tool to All of Hollywood (Exclusive)

The tool, which requires a celebrity to upload their likeness, will flag potentially infringing content — like, say, a star playing a role in a fan-generated movie — for a possible takedown.

"Executives at the Google-owned platform tell The Hollywood Reporter that their proprietary deepfake detection tool, years in the making, is now open to anyone at high risk of having their likeness abused: Actors, athletes, creators and musicians, whether they have a YouTube channel or not, can sign up to identify and request removal of deepfakes on its platform...

The timing of the tool’s expansion comes as the industry grapples with the continued growth of deepfakes across platforms, and with video models quickly turning hypothetical worst-case scenarios into reality for many stars."

Monday, April 20, 2026

Google Starts Scanning All Your Photos As New Update Goes Live; Forbes, April 20, 2026

Zak Doffman, Forbes; Google Starts Scanning All Your Photos As New Update Goes Live

"Take a moment to think before you dive in. That’s the best advice for Google Photos users, as the company confirms its latest update can scan all your photos to “use actual images of you and your loved ones” in AI image generation. That means Gemini seeing who you know and what you do. You likely have tens or hundreds of thousands of photos. They’re all exposed if you update.

We’re talking Personal Intelligence, Google’s latest AI upgrade path which lets users opt-in to connecting Google apps to Gemini...

This is the latest iteration in the ongoing battle between convenience and privacy playing out on our phones and computers."

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Saturday, February 7, 2026

Ex-Google engineer found guilty of stealing AI secrets for China; Axios, February 2, 2026

Rebecca Falconer, Axios; Ex-Google engineer found guilty of stealing AI secrets for China

"A former Google engineer was found guilty of economic espionage and theft of confidential AI technology for the benefit of China's government, the FBI said Monday.

Why it matters: Intelligence and defense officials have long warned of increased efforts by Beijing and others to obtain U.S. intellectual property and use AI against American interests.


State of play: A federal jury in San Francisco convicted Linwei Ding, also known as Leon Ding, 38, of seven counts of economic espionage and seven counts of theft of trade secrets, per an FBI post on X Monday."

Friday, February 6, 2026

Publishers Strike Back Against Google in Infringement Suit; Publishers Weekly, February 6, 2026

Jim Milliot, Publishers Weekly; Publishers Strike Back Against Google in Infringement Suit

"The Association of American Publishers continued its fight this week to allow two of its members, Hachette Book Group and Cengage, to join a class action copyright infringement lawsuit against Google and its generative AI product Gemini. The lawsuit was first brought by a group of illustrators and writers in 2023.

In mid-January the AAP filed its first motion to allow the two publishers to take part in the lawsuit that is now before Judge Eumi K. Lee in the U.S. District Court for the Northern District of California. Earlier this week the AAP filed its reply to Google’s motion asking the court to block AAP’s request.

At the core of Google’s argument is the notion that the publishers should have asked to intervene sooner, as well as the assertion that the publishers have no interest in the case because they don’t own authors’ works.

In its response, the AAP argues that it was only when the case reached class certification that the publishers’ interests became clear. The new filing also rebuts Google’s other claim that publishers don’t own any rights.

“Google’s professed misunderstanding of ownership exemplifies exactly the kind of value that Proposed Intervenors bring to the case,” the AAP stated, arguing that both HBG and Cengage own certain rights to the works in question and that “scores” of other publishers will be impacted by the litigation."

Sunday, January 18, 2026

Publishers seek to join lawsuit against Google over AI training; Reuters, January 15, 2026

Reuters; Publishers seek to join lawsuit against Google over AI training

"Publishers Hachette Book Group and Cengage Group asked a California federal court on Thursday for permission to intervene in a proposed class action lawsuit against Google over the alleged misuse of copyrighted material used to train its artificial intelligence systems.

The publishers said in their proposed complaint that the tech company "engaged in one of the most prolific infringements of copyrighted materials in history" to build its AI capabilities, copying content from Hachette books and Cengage textbooks without permission...

The lawsuit currently involves groups of visual artists and authors who sued Google for allegedly misusing their work to train its generative AI systems. The case is one of many high-stakes lawsuits brought by artists, authors, music labels and other copyright owners against tech companies over their AI training."

Google Engineer Disputes AI Secrets in China Espionage Trial; Bloomberg Law, January 12, 2026

Isaiah Poritz, Bloomberg Law; Google Engineer Disputes AI Secrets in China Espionage Trial

"Former Google LLC engineer Linwei Ding on the first day of his criminal trial pushed back on allegations that he stole over 100 valuable AI trade secrets from the tech giant to start a business in China, arguing that the documents he copied don’t meet the legal definition of a trade secret."

Monday, January 5, 2026

AI copyright battles enter pivotal year as US courts weigh fair use; Reuters, January 5, 2026

Reuters; AI copyright battles enter pivotal year as US courts weigh fair use

"The sprawling legal fight over tech companies' vast copying of copyrighted material to train their artificial intelligence systems could be entering a decisive phase in 2026.

After a string of fresh lawsuits and a landmark settlement in 2025, the new year promises to bring a wave of rulings that could define how U.S. copyright law applies to generative AI. At stake is whether companies like OpenAI, Google and Meta can rely on the legal doctrine of fair use to shield themselves from liability – or if they must reimburse copyright holders, which could cost billions."

Monday, December 22, 2025

OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers; Bloomberg Law, December 22, 2025

Annelise Levy, Bloomberg Law; OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers

"Writers including Pulitzer Prize-winning journalist John Carreyrou filed a copyright lawsuit accusing six AI giants of using pirated copies of their books to train large language models.

The complaint, filed Monday in the US District Court for the Northern District of California, claims Anthropic PBC, Google LLC, OpenAI Inc., Meta Platforms Inc., xAI Corp., and Perplexity AI Inc. committed a “deliberate act of theft.”

It is the first copyright lawsuit against xAI over its training process, and the first suit brought by authors against Perplexity...

Carreyrou is among the authors who opted out of a $1.5 billion class-action settlement with Anthropic."

Friday, November 14, 2025

Who Pays When A.I. Is Wrong?; The New York Times, November 12, 2025

The New York Times; Who Pays When A.I. Is Wrong?

"Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts."

Friday, October 17, 2025

Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis; Yale School of Medicine, October 15, 2025

Naedine Hazell, Yale School of Medicine; Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis

"Google and Yale researchers have developed a more “advanced and capable” AI model for analyzing single-cell RNA data using large language models that is expected to “lead to new insights and potential biological discoveries.”

“This announcement marks a milestone for AI in science,” Google announced.

On social media and in comments, scientists and developers applauded the model—which Google released Oct. 15—as the much-needed bridge to make single-cell data accessible, or interpretable, by AI. 

Many scientists, including cancer researchers focusing on improving the outcomes of immunotherapies, have homed in on single-cell data to understand the mechanisms of disease that either protect, or thwart, its growth. But their efforts have been slowed by the size and complexity of data...

“Just as AlphaFold transformed how we think about proteins, we’re now approaching that moment for cellular biology. We can finally begin to simulate how real human cells behave—in context, in silico," van Dijk explained, following Google's model release. "This is where AI stops being just an analysis tool and starts becoming a model system for biology itself.”

An example of the discoveries that could be revealed using this large-scale model with improved predictive power was tested by Yale and Google researchers prior to the model’s release. The findings will be shared in a forthcoming paper.

On Wednesday, the scaled-up model, Cell2Sentence-Scale 27B, was released. The blog post concluded: “The open model and its resources are available today for the research community. We invite you to explore these tools, build on our work and help us continue to translate the language of life.”"

Wednesday, July 16, 2025

The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025

Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies

"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.

In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:

  • “critical national security challenges;”
  • “joint mission essential tasks in our warfighting domain;”
  • “DoD use cases.”

The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Thursday, October 31, 2024

Election Falsehoods Take Off on YouTube as It Looks the Other Way; The New York Times, October 31, 2024

The New York Times; Election Falsehoods Take Off on YouTube as It Looks the Other Way

"From May through August, researchers at Media Matters tracked 30 of the most popular YouTube channels they identified as persistently spreading election misinformation, to analyze the narratives they shared in the run-up to November’s election.

The 30 conservative channels posted 286 videos containing election misinformation, which racked up more than 47 million views. YouTube generated revenue from more than a third of those videos by placing ads before or during them, researchers found. Some commentators also made money from those videos and other monetized features available to members of the YouTube Partner Program...

Mr. Giuliani, the former New York mayor, posted more false electoral claims to YouTube than any other major commentator in the research group, the analysis concluded...

YouTube, which is owned by Google, has prided itself on connecting viewers with “authoritative information” about elections. But in this presidential contest, it acted as a megaphone for conspiracy theories."

Wednesday, October 16, 2024

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”"

Sunday, September 29, 2024

AI could be an existential threat to publishers – that’s why Mumsnet is fighting back; The Guardian, September 28, 2024

The Guardian; AI could be an existential threat to publishers – that’s why Mumsnet is fighting back

"After nearly 25 years as a founder of Mumsnet, I considered myself pretty unshockable when it came to the workings of big tech. But my jaw hit the floor last week when I read that Google was pushing to overhaul UK copyright law in a way that would allow it to freely mine other publishers’ content for commercial gain without compensation.

At Mumsnet, we’ve been on the sharp end of this practice, and have recently launched the first British legal action against the tech giant OpenAI. Earlier in the year, we became aware that it was scraping our content – presumably to train its large language model (LLM). Such scraping without permission is a breach of copyright laws and explicitly of our terms of use, so we approached OpenAI and suggested a licensing deal. After lengthy talks (and signing a non-disclosure agreement), it told us it wasn’t interested, saying it was after “less open” data sources...

If publishers wither and die because the AIs have hoovered up all their traffic, then who’s left to produce the content to feed the models? And let’s be honest – it’s not as if these tech giants can’t afford to properly compensate publishers. OpenAI is currently fundraising to the tune of $6.5bn, the single largest venture capital round of all time, valuing the enterprise at a cool $150bn. In fact, it has just been reported that the company is planning to change its structure and become a for-profit enterprise...

I’m not anti-AI. It plainly has the potential to advance human progress and improve our lives in myriad ways. We used it at Mumsnet to build MumsGPT, which uncovers and summarises what parents are thinking about – everything from beauty trends to supermarkets to politicians – and we licensed OpenAI’s API (application programming interface) to build it. Plus, we think there are some very good reasons why these AI models should ingest Mumsnet’s conversations to train their models. The 6bn-plus words on Mumsnet are a unique record of 24 years of female interaction about everything from global politics to relationships with in-laws. By contrast, most of the content on the web was written by and for men. AI models have misogyny baked in and we’d love to help counter their gender bias.

But Google’s proposal to change our laws would allow billion-dollar companies to waltz untrammelled over any notion of a fair value exchange in the name of rapid “development”. Everything that’s unique and brilliant about smaller publisher sites would be lost, and a handful of Silicon Valley giants would be left with even more control over the world’s content and commerce."

Friday, August 30, 2024

Breaking Up Google Isn’t Nearly Enough; The New York Times, August 27, 2024

The New York Times; Breaking Up Google Isn’t Nearly Enough

"Competitors need access to something else that Google monopolizes: data about our searches. Why? Think of Google as the library of our era; it’s the first stop we go to when seeking information. Anyone who wants to build a rival library needs to know what readers are looking for in order to stock the right books. They also need to know which books are most popular, and which ones people return quickly because they’re no good."