Showing posts with label Big Tech. Show all posts

Sunday, March 15, 2026

Social Media Isn’t Just Speech. It’s Also a Defective, Hazardous Product.; The New York Times, March 14, 2026

The New York Times; Social Media Isn’t Just Speech. It’s Also a Defective, Hazardous Product.

"For two decades now, social media companies have been virtually untouchable, profitably floating above accusations that they normalize propaganda, addict children and degrade our character. Legally and politically, platforms like Facebook, Instagram and YouTube have been protected by an idea that they and others have promoted: that they are not just innovative technologies but also speech platforms, so that imposing any limits on them would amount to both censorship and a drag on technological progress.

That protection is finally starting to weaken, thanks to a growing realization that social media is also a matter of public health. Seen this way, social media appears as something less newfangled and more familiar: a defective, hazardous product. The current trial of Meta’s Instagram and Google’s YouTube in Los Angeles Superior Court, in which a 20-year-old woman has accused the platforms of designing their products in ways that harmed her mental and physical health, is the clearest sign of this shift.

The case, in which closing arguments were made on Thursday, is the first of many lawsuits brought by thousands of young people, school districts and state attorneys general against companies like Meta, Google, Snap and TikTok. The plaintiffs in these cases do not accuse the companies merely of serving up bad content to young people; they argue that the very design of social media is intentionally engineered to create compulsions and habits of overuse, regardless of the content provided."

Saturday, March 14, 2026

The Guardian view on changes to copyright laws: authors should be protected over big tech; The Guardian, March 13, 2026

The Guardian; The Guardian view on changes to copyright laws: authors should be protected over big tech

"In a scene that might have come from a dystopian novel, books were being stamped with “Human Authored” logos at this week’s London Book Fair. The Society of Authors described its labelling scheme as “an important sticking plaster to protect and promote human creativity in lieu of AI labelled content in the marketplace”.

Visitors to the fair were also being given copies of Don’t Steal This Book, an anthology of about 10,000 writers including Nobel laureate Kazuo Ishiguro, Malorie Blackman, Jeanette Winterson and Richard Osman, in which the pages are completely blank. The back cover states: “The UK government must not legalise book theft to benefit AI companies.” The message is clear: writers have had enough.

The fair comes the week before the government is due to deliver its progress report on AI and copyright, after proposals for a relaxation of existing laws caused outrage last year. Philippa Gregory, the novelist, described the plans for an “opt-out” policy, which puts the onus on writers to refuse permission for their work to be trawled, as akin to putting a sign on your front door asking burglars to pass by...

A House of Lords report published last week lays out two possible futures: one in which the UK “becomes a world-leading home for responsible, legalised artificial intelligence (AI) development” and another in which it continues “to drift towards tacit acceptance of large-scale, unlicensed use of creative content”. One scenario protects UK artists, the other benefits global tech companies. To avoid a world of empty content, the choice is clear."

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Wednesday, March 11, 2026

Democrats ask what happened to millions earmarked for Trump’s library; The Washington Post, March 11, 2026

The Washington Post; Democrats ask what happened to millions earmarked for Trump’s library

ABC, Meta, Paramount and X reportedly agreed to pay at least $63 million in settlements with the president. The original fund was dissolved last year.

"Congressional Democrats are opening a probe into millions of dollars private companies pledged to President Donald Trump’s planned presidential library, asking what happened to the money after the original fund was dissolved last year.

Sens. Elizabeth Warren (Massachusetts) and Richard Blumenthal (Connecticut) and Rep. Melanie Stansbury (New Mexico) wrote Monday to the leaders of ABC, Meta, Paramount and X, requesting information about the terms of their agreements and the status of the funds they pledged to hand over to the president’s representatives. The letters were shared with The Washington Post."

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Tuesday, February 24, 2026

‘It’s the most urgent public health issue’: Dr Rangan Chatterjee on screen time, mental health – and banning social media until 18; The Guardian, February 16, 2026

Emine Saner, The Guardian; ‘It’s the most urgent public health issue’: Dr Rangan Chatterjee on screen time, mental health – and banning social media until 18

"Chatterjee believes that “the widespread adoption of screens into our children’s lives is the most urgent public health issue of our time”. He was never very political, he says. He is the affable host of a successful health podcast, Feel Better, Live More, and his books strike an optimistic, inspiring tone – but on this issue he is passionate, his frustration obvious. “I think successive governments have been very weak here, and they are failing a whole generation of children. I think they’ve already failed a generation of children.”"

Wednesday, February 18, 2026

Mark Zuckerberg Takes the Stand in Landmark Social Media Addiction Trial; The New York Times, February 18, 2026

The New York Times; Mark Zuckerberg Takes the Stand in Landmark Social Media Addiction Trial

"Mr. Zuckerberg’s appearance in court — his first time testifying about child safety in front of a jury — was highly anticipated. Meta, which owns Instagram and Facebook and has more than 3.5 billion users, has come under fire as one of the biggest providers of platforms for teenagers. Parents, as well as tech policy and child safety groups have accused the company of hooking young people on its apps and causing mental health issues that have led to anxiety, depression, eating disorders and self-harm...

In internal documents that surfaced in some of the lawsuits, Mr. Zuckerberg and other Meta leaders repeatedly played down their platforms’ risks to young people, while rejecting employee pleas to bolster youth guardrails and hire additional staff...

K.G.M.’s lawyer, Mark Lanier, said during his opening statement this month that Instagram and YouTube’s apps were built like “digital casinos” that profited off addictive behavior. He pointed to internal documents from Meta and Google, which owns YouTube, comparing their technology to gambling, tobacco and drug use. In a 2015 memo, Mr. Zuckerberg encouraged executives to prioritize increasing the time that teenagers spend on Meta’s apps.

Meta said in its opening statement that K.G.M.’s mental health issues were caused by familial abuse and turmoil. The company presented medical records to show that social media addiction was not a focus of her therapy sessions."

Tuesday, February 10, 2026

Meta and YouTube Created ‘Digital Casinos,’ Lawyers Argue in Landmark Trial; The New York Times, February 9, 2026

Eli Tan, The New York Times; Meta and YouTube Created ‘Digital Casinos,’ Lawyers Argue in Landmark Trial

"The trial in the California Superior Court of Los Angeles is the first in a series of landmark cases against Meta, Snap, TikTok and YouTube that test a novel legal theory arguing that tech can be as harmful as casinos and cigarettes.

Teenagers, school districts and states have filed thousands of lawsuits accusing the social media titans of designing platforms that encourage excessive use. Drawing inspiration from a legal playbook used against Big Tobacco last century, lawyers argue that features like infinite scroll, auto video play and algorithmic recommendations have led to compulsive social media use.

The cases pose some of the most significant legal threats to Meta, Snap, TikTok and YouTube, potentially opening them up to new liabilities for users’ well-being. A win for the plaintiffs could prompt more lawsuits and lead to monetary damages, as well as change how social media is designed."

Thursday, February 5, 2026

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI; The Guardian, February 5, 2026

Anuj Behal, The Guardian; ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI


[Kip Currier: The largely unaddressed plight of content moderators became more real for me after reading this haunting 9/9/24 piece in the Washington Post, "I quit my job as a content moderator. I can never go back to who I was before."

As mentioned in the graphic article's byline, content moderator Alberto Cuadra spoke with journalist Beatrix Lockwood. Maya Scarpa's illustrations poignantly give life to Alberto Cuadra's first-hand experiences and ongoing impacts from the content moderation he performed for an unnamed tech company. I talk about Cuadra's experiences and the ethical issues of content moderation, social media, and AI in my Ethics, Information, and Technology book.]


[Excerpt]

"Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules.

On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm.

This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.

“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her...

“In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances.

A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted."

Tuesday, January 27, 2026

Social Media Giants Face Landmark Legal Tests on Child Safety; The New York Times, January 27, 2026

The New York Times; Social Media Giants Face Landmark Legal Tests on Child Safety

"Are social media apps addictive like cigarettes? Are these sites defective products?

Those are the claims that Meta, Snap, TikTok and YouTube will face this year in a series of landmark trials. Teenagers, school districts and states have filed thousands of lawsuits accusing the social media titans of designing platforms that encouraged excessive use by millions of young Americans, leading to personal injury and other harms.

On Tuesday, the first of these bellwether cases is scheduled to start with jury selection in California Superior Court of Los Angeles County. A now-20-year-old Californian identified by the initials K.G.M. filed the lawsuit in 2023, claiming she became addicted to the social media sites as a child and experienced anxiety, depression and body-image issues as a result.

The cases pose one of the most significant legal threats to Meta, Snap, TikTok and YouTube, potentially opening them up to new liabilities for users’ well-being. Drawing inspiration from a legal playbook used against Big Tobacco last century, lawyers plan to use the argument that the companies created addictive products.

A win could open the door to more lawsuits from millions of social media users. It could also lead to huge monetary damages and changes to social media sites’ designs."

Saturday, January 17, 2026

Public Shame Is the Most Effective Tool for Battling Big Tech; The New York Times, January 14, 2026

The New York Times; Public Shame Is the Most Effective Tool for Battling Big Tech

"It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope...

Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable...

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level."

Thursday, October 30, 2025

From CBS to TikTok, US media are falling to Trump’s allies. This is how democracy crumbles; The Guardian, October 29, 2025

The Guardian; From CBS to TikTok, US media are falling to Trump’s allies. This is how democracy crumbles

"Democracy may be dying in the US. Whether the patient receives emergency treatment in time will determine whether the condition becomes terminal. Before Donald Trump’s return to the presidency, I warned of “Orbánisation” – in reference to Hungary’s authoritarian leader Viktor Orbán. There, democracy was not extinguished by firing squads or the mass imprisonment of dissidents, but by slow attrition. The electoral system was warped, civil society was targeted and pro-Orbán moguls quietly absorbed the media.

Nine months on, and Orbánisation is in full bloom across the Atlantic. Billionaire Larry Ellison, the Oracle co-founder, and his filmmaker son, David, have become blunt instruments in this process. Trump boasts they are “friends of mine – they’re big supporters of mine”. Larry Ellison, second only to Elon Musk as the world’s richest man, has poured tens of millions into Republican coffers...

US democracy has always been heavily flawed. It is so rigged in favour of wealthy elites that a detailed academic study back in 2014 found that the political system is rigged in favour of what the economic elites want. Yet because, unlike Hungary, the US has no history of dictatorship, with a system of supposed checks and balances, some felt it could never succumb to tyranny. Such complacency has collided with brutal reality. In just nine months, the US has been dragged towards an authoritarian abyss. A warning: Trump has 39 months left in office."

Wednesday, October 29, 2025

Big Tech Makes Cal State Its A.I. Training Ground; The New York Times, October 26, 2025

The New York Times; Big Tech Makes Cal State Its A.I. Training Ground

"Cal State, the largest U.S. university system with 460,000 students, recently embarked on a public-private campaign — with corporate titans including Amazon, OpenAI and Nvidia — to position the school as the nation’s “first and largest A.I.-empowered” university. One central goal is to make generative A.I. tools, which can produce humanlike texts and images, available across the school’s 22 campuses. Cal State also wants to embed chatbots in teaching and learning, and prepare students for “increasingly A.I.-driven” careers.

As part of the effort, the university is paying OpenAI $16.9 million to provide ChatGPT Edu, the company’s tool for schools, to more than half a million students and staff — which OpenAI heralded as the world’s largest rollout of ChatGPT to date. Cal State also set up an A.I. committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students’ career opportunities."

Saturday, October 4, 2025

How to live a good life in difficult times: Yuval Noah Harari, Rory Stewart and Maria Ressa in conversation; The Guardian, October 4, 2025

Interview, The Guardian; How to live a good life in difficult times: Yuval Noah Harari, Rory Stewart and Maria Ressa in conversation


[Kip Currier: One of the most insightful, nuanced, enlightening pieces I've read amongst thousands this year. I've followed and admired the work and wisdom of Maria Ressa and Yuval Noah Harari but wasn't familiar with UK academic and politician Rory Stewart who makes interesting contributions to this joint interview. They all individually and collectively identify in clear-eyed fashion what's going on in the world today, what the stakes are, and what each of us can do to try to make some kind of positive difference.

I shared this article with others in my network and encourage you to do the same, so these beneficial, thought-provoking perspectives can be read by as many as possible.]


[Excerpt]

"What happens when an internationally bestselling historian, a Nobel peace prize-winning journalist and a former politician get together to discuss the state of the world, and where we’re heading? Yuval Noah Harari is an Israeli medieval and military historian best known for his panoramic surveys of human history, including Sapiens, Homo Deus and, most recently, Nexus: A Brief History of Information Networks from the Stone Age to AI. Maria Ressa, joint winner of the Nobel peace prize, is a Filipino and American journalist who co-founded the news website Rappler. And Rory Stewart is a British academic and former Conservative MP, writer and co-host of The Rest Is Politics podcast. Their conversation ranged over the rise of AI, the crisis in democracy and the prospect of a Trump-Putin wedding, but began by considering a question central to all of their work: how to live a good life in an increasingly fragmented and fragile world?...

YNH I think that more people need to realise that we have to do the hard work ourselves. There is a tendency to assume that we can rely on reality to do the job for us. That if there are people who talk nonsense, who support illogical policies, who ignore the facts, sooner or later, reality will wreak vengeance on them. And this is not the way that history works.

So if you want the truth, and you want reality to win, each of us has to do some of the hard work ourselves: choose one thing and focus on that and hope that other people will also do their share. That way we avoid the extremes of despair."

Monday, September 29, 2025

I Sued Anthropic, and the Unthinkable Happened; The New York Times, September 29, 2025

The New York Times; I Sued Anthropic, and the Unthinkable Happened

"In August 2024, I became one of three named plaintiffs leading a class-action lawsuit against the A.I. company Anthropic for pirating my books and hundreds of thousands of other books to train its A.I. The fight felt daunting, almost preposterous: me — a queer, female thriller writer — versus a company now worth $183 billion?

Thanks to the relentless work of everyone on my legal team, the unthinkable happened: Anthropic agreed to pay authors and publishers $1.5 billion in the largest copyright settlement in history. A federal judge preliminarily approved the agreement last week.

This settlement sends a clear message to the Big Tech companies splashing generative A.I. over every app and page and program: You are not above the law. And it should signal to consumers everywhere that A.I. isn’t an unstoppable tsunami about to overwhelm us. Now is the time for ordinary Americans to recognize our agency and act to put in place the guardrails we want.

The settlement isn’t perfect. It’s absurd that it took an army of lawyers to demonstrate what any 10-year-old knows is true: Thou shalt not steal. At around $3,000 per work, shared by the author and publisher, the damages are far from life-changing (and, some argue, a slap on the wrist for a company flush with cash). I also disagree with the judge’s ruling that, had Anthropic acquired the books legally, training its chatbot on them would have been “fair use.” I write my novels to engage human minds — not to empower an algorithm to mimic my voice and spit out commodity knockoffs to compete directly against my originals in the marketplace, nor to make that algorithm’s creators unfathomably wealthy and powerful.

But as my fellow plaintiff Kirk Wallace Johnson put it, this is “the beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of A.I.” Anthropic will destroy its trove of illegally downloaded books; its competitors should take heed to get out of the business of piracy as well. Dozens of A.I. copyright lawsuits have been filed against OpenAI, Microsoft and other companies, led in part by Sylvia Day, Jonathan Franzen, David Baldacci, John Grisham, Stacy Schiff and George R. R. Martin. (The New York Times has also brought a suit against OpenAI and Microsoft.)

Though a settlement isn’t legal precedent, Bartz v. Anthropic may serve as a test case for other A.I. lawsuits, the first domino to fall in an industry whose “move fast, break things” modus operandi led to large-scale theft. Among the plaintiffs of other cases are voice actors, visual artists, record labels, YouTubers, media companies and stock-photo libraries, diverse stakeholders who’ve watched Big Tech encroach on their territory with little regard for copyright law...

Now the book publishing industry has sent a message to all A.I. companies: Our intellectual property isn’t yours for the taking, and you cannot act with impunity. This settlement is an opening gambit in a critical battle that will be waged for years to come."

Saturday, June 21, 2025

Conservative groups demand Congress protect intellectual property from patent abuse; Washington Examiner, June 18, 2025

Ross O'Keefe, Washington Examiner; Conservative groups demand Congress protect intellectual property from patent abuse

"A collection of 28 conservative groups is urging Republican Congress members to pass the PERA, PREVAIL, and RESTORE acts — all aimed at patent protection — as Chinese influence permeates U.S. intellectual property...

The PREVAIL Act, or Promoting and Respecting Economically Vital American Innovation Leadership, was introduced by Sen. Chris Coons (D-DE) in the last Congress and aims to “invest in inventors in the United States, maintain the United States as the leading innovation economy in the world, and protect the property rights of the inventors that grow the economy of the United States.”

The PERA Act, or the Patent Eligibility Restoration, was introduced by Sen. Thom Tillis (R-NC) also in the last Congress and aims to restore patent eligibility to several fields. Lastly, the RESTORE Act, or Realizing Engineering, Science, and Technology Opportunities by Restoring Exclusive Patent Rights, works to give patent owners the right to a “rebuttable presumption that the court should grant a permanent injunction with respect to that infringing conduct” if a court finds that there was an infringement of a right secured by patent.

All three acts could work as pro-patent freedom legislation, possibly helping U.S. intellectual property owners fight back against Big Tech and China."

Tuesday, May 20, 2025

The AI and Copyright Issues Dividing Trump’s Court; Jacobin, May 19, 2025

David Moscrop, Jacobin; The AI and Copyright Issues Dividing Trump’s Court

"As many have pointed out, the copyright-AI battle is not only a central struggle within the Trump administration; it is also a broader conflict over who controls intellectual property and to what end. For decades, corporations have abused copyright to unreasonably extend coverage periods and impoverish the public domain. Their goal: maximizing both control over IP and profits. But AI firms aren’t interested in reforming that system. They’re not looking to open access or enrich the commons — they just want training data. And in fighting for it, they may end up reshaping copyright law in ways that outlast this administration.

As Nguyen notes, after the Register of Copyrights, Shira Perlmutter, was turfed by DOGE-aligned officials, Trump antitrust adviser Mike Davis posted to Truth Social: “Now tech bros are going to steal creators’ copyrights for AI profits. . . . This is 100 percent unacceptable.” Trump reposted it. That’s the shape of the struggle: MAGA populists, who see their own content as sacred property, are up against a tech elite that views all content as extractable fuel."