Showing posts with label Google. Show all posts

Thursday, October 31, 2024

Election Falsehoods Take Off on YouTube as It Looks the Other Way; The New York Times, October 31, 2024

The New York Times; Election Falsehoods Take Off on YouTube as It Looks the Other Way

"From May through August, researchers at Media Matters tracked 30 of the most popular YouTube channels they identified as persistently spreading election misinformation, to analyze the narratives they shared in the run-up to November’s election.

The 30 conservative channels posted 286 videos containing election misinformation, which racked up more than 47 million views. YouTube generated revenue from more than a third of those videos by placing ads before or during them, researchers found. Some commentators also made money from those videos and other monetized features available to members of the YouTube Partner Program...

Mr. Giuliani, the former New York mayor, posted more false electoral claims to YouTube than any other major commentator in the research group, the analysis concluded...

YouTube, which is owned by Google, has prided itself on connecting viewers with “authoritative information” about elections. But in this presidential contest, it acted as a megaphone for conspiracy theories."

Wednesday, October 16, 2024

His daughter was murdered. Then she reappeared as an AI chatbot.; The Washington Post, October 15, 2024

The Washington Post; His daughter was murdered. Then she reappeared as an AI chatbot.

"Jennifer’s name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character’s website, according to a screenshot of her chatbot’s now-deleted profile.

Crecente, who has spent the years since his daughter’s death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family’s permission. Experts said the incident raises concerns about the AI industry’s ability — or willingness — to shield users from the potential harms of a service that can deal in troves of sensitive personal information...

The company’s terms of service prevent users from impersonating any person or entity...

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners...

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn’t the first grieving parent to have their child’s information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to outrage from the children’s families, The Post reported last year.

“We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents,” Claypool said. “They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed.”

Sunday, September 29, 2024

AI could be an existential threat to publishers – that’s why Mumsnet is fighting back; The Guardian, September 28, 2024

The Guardian; AI could be an existential threat to publishers – that’s why Mumsnet is fighting back

"After nearly 25 years as a founder of Mumsnet, I considered myself pretty unshockable when it came to the workings of big tech. But my jaw hit the floor last week when I read that Google was pushing to overhaul UK copyright law in a way that would allow it to freely mine other publishers’ content for commercial gain without compensation.

At Mumsnet, we’ve been on the sharp end of this practice, and have recently launched the first British legal action against the tech giant OpenAI. Earlier in the year, we became aware that it was scraping our content – presumably to train its large language model (LLM). Such scraping without permission is a breach of copyright laws and explicitly of our terms of use, so we approached OpenAI and suggested a licensing deal. After lengthy talks (and signing a non-disclosure agreement), it told us it wasn’t interested, saying it was after “less open” data sources...

If publishers wither and die because the AIs have hoovered up all their traffic, then who’s left to produce the content to feed the models? And let’s be honest – it’s not as if these tech giants can’t afford to properly compensate publishers. OpenAI is currently fundraising to the tune of $6.5bn, the single largest venture capital round of all time, valuing the enterprise at a cool $150bn. In fact, it has just been reported that the company is planning to change its structure and become a for-profit enterprise...

I’m not anti-AI. It plainly has the potential to advance human progress and improve our lives in myriad ways. We used it at Mumsnet to build MumsGPT, which uncovers and summarises what parents are thinking about – everything from beauty trends to supermarkets to politicians – and we licensed OpenAI’s API (application programming interface) to build it. Plus, we think there are some very good reasons why these AI models should ingest Mumsnet’s conversations to train their models. The 6bn-plus words on Mumsnet are a unique record of 24 years of female interaction about everything from global politics to relationships with in-laws. By contrast, most of the content on the web was written by and for men. AI models have misogyny baked in and we’d love to help counter their gender bias.

But Google’s proposal to change our laws would allow billion-dollar companies to waltz untrammelled over any notion of a fair value exchange in the name of rapid “development”. Everything that’s unique and brilliant about smaller publisher sites would be lost, and a handful of Silicon Valley giants would be left with even more control over the world’s content and commerce."

Friday, August 30, 2024

Breaking Up Google Isn’t Nearly Enough; The New York Times, August 27, 2024

 , The New York Times; Breaking Up Google Isn’t Nearly Enough

"Competitors need access to something else that Google monopolizes: data about our searches. Why? Think of Google as the library of our era; it’s the first stop we go to when seeking information. Anyone who wants to build a rival library needs to know what readers are looking for in order to stock the right books. They also need to know which books are most popular, and which ones people return quickly because they’re no good."

Saturday, June 29, 2024

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoë Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

Tuesday, June 4, 2024

Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

Saturday, May 18, 2024

Reddit shares jump after OpenAI ChatGPT deal; BBC, May 17, 2024

  João da Silva, BBC; Reddit shares jump after OpenAI ChatGPT deal

"Shares in Reddit have jumped more than 10% after the firm said it had struck a partnership deal with artificial intelligence (AI) start-up OpenAI.

Under the agreement, the company behind the ChatGPT chatbot will get access to Reddit content, while it will also bring AI-powered features to the social media platform...

Meanwhile, Google announced a partnership in February which allows the technology giant to access Reddit data to train its AI models.

Both in the European Union and US, there are questions around whether it is copyright infringement to train AI tools on such content, or whether it falls under fair use and "temporary copying" exceptions."

Wednesday, March 20, 2024

Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini; TechCrunch, March 20, 2024

Natasha Lomas and Romain Dillet, TechCrunch; Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini

"In a never-ending saga between Google and France’s competition authority over copyright protections for news snippets, the Autorité de la Concurrence announced a €250 million fine against the tech giant Wednesday (around $270 million at today’s exchange rate).

According to the competition watchdog, Google disregarded some of its previous commitments with news publishers. But the decision is especially notable because it drops something else that’s bang up-to-date — by latching onto Google’s use of news publishers’ content to train its generative AI model Bard/Gemini.

The competition authority has found fault with Google for failing to notify news publishers of this GenAI use of their copyrighted content. This is in light of earlier commitments Google made which are aimed at ensuring it undertakes fair payment talks with publishers over reuse of their content."

Thursday, October 19, 2023

AI is learning from stolen intellectual property. It needs to stop.; The Washington Post, October 19, 2023

William D. Cohan, The Washington Post; AI is learning from stolen intellectual property. It needs to stop.

"The other day someone sent me the searchable database published by Atlantic magazine of more than 191,000 e-books that have been used to train the generative AI systems being developed by Meta, Bloomberg and others. It turns out that four of my seven books are in the data set, called Books3. Whoa.

Not only did I not give permission for my books to be used to generate AI products, but I also wasn’t even consulted about it. I had no idea this was happening. Neither did my publishers, Penguin Random House (for three of the books) and Macmillan (for the other one). Neither my publishers nor I were compensated for use of my intellectual property. Books3 just scraped the content away for free, with Meta et al. profiting merrily along the way. And Books3 is just one of many pirated collections being used for this purpose...

This is wholly unacceptable behavior. Our books are copyrighted material, not free fodder for wealthy companies to use as they see fit, without permission or compensation. Many, many hours of serious research, creative angst and plain old hard work go into writing and publishing a book, and few writers are compensated like professional athletes, Hollywood actors or Wall Street investment bankers. Stealing our intellectual property hurts." 

Thursday, July 13, 2023

A.I. Could Solve Some of Humanity's Hardest Problems. It Already Has.; The New York Times, July 11, 2023

The Ezra Klein Show, The New York Times; A.I. Could Solve Some of Humanity's Hardest Problems. It Already Has.

"Since the release of ChatGPT, huge amounts of attention and funding have been directed toward chatbots. These A.I. systems are trained on copious amounts of human-generated data and designed to predict the next word in a given sentence. They are hilarious and eerie and at times dangerous.

But what if, instead of building A.I. systems that mimic humans, we built those systems to solve some of the most vexing problems facing humanity?"

Wednesday, July 12, 2023

Google hit with class-action lawsuit over AI data scraping; Reuters, July 11, 2023

Reuters; Google hit with class-action lawsuit over AI data scraping

"Alphabet's Google (GOOGL.O) was accused in a proposed class action lawsuit on Tuesday of misusing vast amounts of personal information and copyrighted material to train its artificial intelligence systems.

The complaint, filed in San Francisco federal court by eight individuals seeking to represent millions of internet users and copyright holders, said Google's unauthorized scraping of data from websites violated their privacy and property rights."

Wednesday, March 23, 2022

The ex-Google researcher staring down Big Tech; Politico, March 18, 2022

Brakkton Booker, Politico; The ex-Google researcher staring down Big Tech

"THE RECAST:  President Biden ran on a platform promising to root out inequities in federal agencies and programs. Has his administration done enough to tackle the issue of discriminatory AI?

GEBRU: I'm glad to see that some initiatives have started. I like that the Office of Science and Technology Policy (OSTP), for instance, is filled with people I respect, like Alondra Nelson, who is now its head.

My biggest comment on this is that a lot of tech companies — all tech companies — actually, don't have to do any sort of test to prove that they're not putting out harmful products...

The burden of proof always seems to be on us...The burden of proof should be on these tech companies."

Tuesday, February 15, 2022

Opinion: A lawsuit against Google points out a much bigger privacy problem; The Washington Post, February 14, 2022

Editorial Board, The Washington Post; Opinion: A lawsuit against Google points out a much bigger privacy problem

"The phenomenon the recent suits describe, after all, is not particular to Google but rather endemic to almost the entirety of the Web: Companies get to set all the rules, as long as they run those rules by consumers in convoluted terms of service that even those capable of decoding the legalistic language rarely bother to read. Other mechanisms for notice and consent, such as opt-outs and opt-ins, create similar problems. Control for the consumer is mostly an illusion. The federal privacy law the country has sorely needed for decades would replace this old regime with meaningful limitations on what data companies can collect and in what contexts, so that the burden would be on them not to violate the reasonable expectations of their users, rather than placing the burden on the users to spell out what information they will and will not allow the tech firms to have.

The question shouldn’t be whether companies gather unnecessary amounts of sensitive information about their users sneakily — it should be whether companies amass these troves at all. Until Congress ensures that’s true for the whole country, Americans will be clicking through policies and prompts that do little to protect them."

Saturday, February 5, 2022

Two members of Google’s Ethical AI group leave to join Timnit Gebru’s nonprofit; The Verge, February 2, 2022

Emma Roth, The Verge; Two members of Google’s Ethical AI group leave to join Timnit Gebru’s nonprofit

"Two members of Google’s Ethical AI group have announced their departures from the company, according to a report from Bloomberg. Senior researcher Alex Hanna, and software engineer Dylan Baker, will join Timnit Gebru’s nonprofit research institute, Distributed AI Research (DAIR)...

In a post announcing her resignation on Medium, Hanna criticizes the “toxic” work environment at Google, and draws attention to a lack of representation of Black women at the company."

Thursday, October 7, 2021

AI-ethics pioneer Margaret Mitchell on her five-year plan at open-source AI startup Hugging Face; Emerging Tech Brew, October 4, 2021

Hayden Field, Emerging Tech Brew; AI-ethics pioneer Margaret Mitchell on her five-year plan at open-source AI startup Hugging Face

"Hugging Face wants to bring these powerful tools to more people. Its mission: Help companies build, train, and deploy AI models—specifically natural language processing (NLP) systems—via its open-source tools, like Transformers and Datasets. It also offers pretrained models available for download and customization.

So what does it mean to play a part in “democratizing” these powerful NLP tools? We chatted with Mitchell about the split from Google, her plans for her new role, and her near-future predictions for responsible AI."

Friday, May 28, 2021

Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says; CNBC, May 26, 2021

Emily DeCiccio, CNBC; Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says

"Privacy laws in the U.S. need to be updated, especially after Google struck a deal with a major hospital chain, medical ethics expert Arthur Caplan said Wednesday.

“Now we’ve got electronic medical records, huge volumes of data, and this is like asking a navigation system from a World War I airplane to navigate us up to the space shuttle,” Caplan, a professor at New York University’s Grossman School of Medicine, told “The News with Shepard Smith.” “We’ve got to update our privacy protection and our informed consent requirements.”

On Wednesday, Google’s cloud unit and hospital chain HCA Healthcare announced a deal that — according to The Wall Street Journal — gives Google access to patient records. The tech giant said it will use that to make algorithms to monitor patients and help doctors make better decisions."

Wednesday, August 19, 2020

Self-Driving to Federal Prison: The Trade Secret Theft Saga of Anthony Levandowski Continues; Lexology, August 13, 2020

Seyfarth Shaw LLP - Robert Milligan and Darren W. Dummit, Lexology; Self-Driving to Federal Prison: The Trade Secret Theft Saga of Anthony Levandowski Continues

"Judge Alsup, while steadfastly respectful of Levandowski as a good person and as a brilliant man who the world would learn a lot listening to, nevertheless found prison time to be the best available deterrent to engineers and employees privy to trade secrets worth billions of dollars to competitors: “You’re giving the green light to every future engineer to steal trade secrets,” he told Levandowski’s attorneys. “Prison time is the answer to that.” To further underscore the importance of deterring similar behavior in the high stakes tech world, Judge Alsup required Levandowski to give the aforementioned public speeches describing how he went to prison."

Thursday, July 30, 2020

Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight; The Guardian, July 30, 2020

The Guardian; Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight

"As David Cicilline put it: “These companies as they exist today have monopoly power. Some need to be broken up, all need to be properly regulated and held accountable.” And then he quoted Louis Brandeis, who said, “We can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can’t have both.”"

Friday, May 1, 2020

How to find copyright-free images (and avoiding a stock photo subscription); TNW, April 29, 2020

TNW; How to find copyright-free images (and avoiding a stock photo subscription)

"If you search for any term and head to the Images section in Google, you’ll instantly find thousands of images. There’s one issue, though: Some of them might be copyrighted and you might be putting yourself (or your employer) at risk. Fortunately, you can filter images by usage rights, which will help you avoid that...

Here are a couple of our favorite free stock photo sites:
If you’re looking for copyright-free PNG cutouts, you should check out PNGPlay, Icon8, and PNGimg.
Even though a lot of these images are free to use without any attribution, you can support the creators by giving them credit, which in turn gives their work more exposure. You might not have the resources to purchase their images, but someone else might be interested in hiring them. Crediting them for their work helps with that.
You get to save some money by not buying a Shutterstock subscription, and they get free exposure. It’s a win-win."