Wednesday, September 18, 2024

Kip Currier: Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose?

This is the 5,000th post since this blog launched almost 14 years ago on October 3, 2010. My first post was about a 10/1/10 New York Times article, "When Lawyers Can Peek at Facebook." The sentence I referenced as an excerpt from that story was:


"Could the legal world be moving toward a new set of Miranda warnings: “Anything you say, do — or post on Facebook — can be used against you in a court of law”?"

 

Social Media Revisited: What Can We Learn?


The legal world in 2010 -- much of the world, really -- was grappling with what guardrails and guidelines to implement for the then-emerging technology of social media: guardrails like delineating the line between lawyers accessing the public-facing social media pages of potential jurors (okay 👍) and lawyers using trickery to unethically gain access to the private social media pages of possible jurors (not okay 👎), as an excerpt from that Times article distinguishes:


“If I’m researching a case I’ll do Google searches,” said Carl Kaplan, senior appellate counsel at the Center for Appellate Litigation and a lecturer at Columbia Law School. “What’s the difference between that and looking at someone’s Facebook?

“I think it’s good that they’re kind of recognizing reality and seeming to draw a line in the sand between public pages and being sneaky and gaining access to private pages in unethical ways.”

The city bar did specifically note that it would be unethical to become someone’s friend through deception. In fact, the four-page opinion went into great detail in describing a hypothetical example of the improper way to go about becoming someone’s Facebook friend.

 

https://archive.nytimes.com/cityroom.blogs.nytimes.com/2010/10/01/when-lawyers-can-peek-at-facebook/?scp=2&sq=facebook%20ethics&st=cse

 

 

AI for Good, AI for Bad: Guardrails and Guidelines


Any of this sound familiar? It should. In today's AI Age, we're grappling again with what guardrails and guidelines to put in place to protect us from the known and unknown dangers of AI for Bad, while encouraging the beneficial innovations of AI for Good. Back in 2010, too, many of us were still getting up to speed with the novelties, ambiguities -- and the costs and benefits -- of using social media. And a lot of us are doing the same thing right now with AI and Generative AI: brainstorming and writing via chatbots like ChatGPT and Claude, making AI-generated images with DALL-E 3 and Adobe Firefly, using AI to develop groundbreaking medical treatments and make extraordinary scientific discoveries, and much, much more.


In 2024, we know more about social media. We've had more lived experiences with the good, the bad, and sometimes the very ugly potentialities and realities of social media. Yes, social media has made it possible to connect more easily and widely with others. It's also enabled access to information on scales that were unimaginable in the analog era. But it's also come with real downsides and harms, such as cyberbullying, online hate speech, disinformation, and doxxing. Science, too, is uncovering more about the effects of social media and other technologies on our lives in the 2020s. Research, for example, is providing empirical evidence of the deleterious effects of our technology addictions, particularly on the mental health of children, who admit to using their smartphones an average of 4-7 hours every day.

 

What Are the Necessary, Vital Questions?


At these early stages of the AI revolution, it is advisable for us to practice some additional mindfulness and reflection on where we've been with technology -- and where we are going and want to go. To ask some "lessons learned" and "roads not taken" questions -- the "necessary and vital questions" -- that aren't easily answered, like:


  • Would we as citizens -- mothers, fathers, daughters, sons -- have done anything differently (on micro and/or macro scales) about social media back in 2010?
  • What would we have wanted policymakers, elected leaders, for-profit companies, non-profit agencies, board members and trustees, educators, faith leaders, civil watchdogs, cultural heritage institutions, historically disadvantaged peoples, and other stakeholders to have said or done to better equip our societies to use social media more responsibly, more equitably, and more ethically?
  • What frameworks and protections might we have wanted to devise and embed in our systems to verify that what the social media gatekeepers told us they were doing was actually being done?
  • What legal systems and consequences would we have lobbied for and codified to hold social media owners and content distributors accountable for breaches of the public trust?

 

A Content Moderator's Tale


As I write this post, I'm reminded of a September 9, 2024 Washington Post article, illustrated with comic book-style images, that I read last week, titled ‘I quit my job as a content moderator. I can never go back to who I was before.’ The protagonist in the comic, Alberto Cuadra, is identified as a non-U.S. "former content moderator". Think of content moderators as the "essential workers" of the social media ecosystem, like the essential workers during the COVID-19 pandemic lockdowns who kept our communities running while we were safely "sheltering in place" at home. Content moderators are the unsung online warriors who take jobs with tech companies (e.g. Facebook/Meta, YouTube/Alphabet, Twitter-cum-X, TikTok) to clear out the proverbial social media landmines. They do this by peering at the really icky Internet "stuff" -- the most depraved creations, the most psychologically injurious content posted to social media platforms around the world -- in order to render it inaccessible to users and shield us from these digitally hazardous materials.


Back to the content moderator's story: after suffering from anxiety and other ills caused by the disturbing content with which he had made a Faustian bargain for the sake of gainful employment, Alberto Cuadra ultimately decides that he has to leave his job. He does this to reclaim his physical and mental health, despite the unnamed company where he works providing "a certain number of free sessions with a therapist" for any employee. Alberto's short but powerful graphic story, made possible by Washington Post reporter Beatrix Lockwood and illustrator Maya Scarpa, ends with a poignant pronouncement:

 

If I ever have children, I won't let them on any media until they're 18.



 The Case for AI/Gen AI Regulation and Oversight


As always when faced with a new technology (whether it's the 15th century printing press or the 20th century Internet), the disciplines of law, ethics, and policy are playing catch-up with new disruptive technologies: namely, AI and Generative AI. Just as state and city bar associations needed to issue ethics opinions a decade and a half ago on the do's and don'ts of lawyers using social media for all types of legal tasks, 2024 has seen state bars -- and, just last month, the American Bar Association -- publish ethics opinions on what lawyers must and must not do vis-à-vis the use of AI and Generative AI tools. Lawyers don't really have the luxury of not following such rules if they want to keep their licenses active and stay in good standing with bar associations and clients. Are there not sufficient reasons and incentives now, though, for non-lawyers to also spell out more of the do's and don'ts for AI? To express their voices and have policies created and enforced that protect their interests too? To not have the loudest voices in the room be the tech companies and "special interests" that have the most to gain by not having robust regulatory systems, enforcement mechanisms, and penalties that protect everyone from the bad things that bad people can do with technologies like Generative AI?

 

What Can We Do?


Amidst all of the perils and promises of digital and AI technologies, what can people do who want to see more substantive guardrails and guidelines for AI -- before we look back 14 years from now, in 2038, and wonder what we could have done differently if AI follows a similar or worse trajectory than social media has? While our communities and societies still have a chance to weigh in on what protections and incentives to have for AI, we can join groups that are advocating for regulatory oversight of AI. One thing we know for sure is that being proactive rather than reactive has many advantages in life. First and foremost, it enables us to have more agency: to say what we want and need, and to work toward achieving those goals and aspirations, rather than reacting to someone else's objectives. To that end, we can tell our elected leaders what we want AI to do and not do.


Admittedly, it can feel overwhelming if we approach an issue like what to do about AI/Gen AI as just one person striving to effect change. Yuval Noah Harari, "big thinker" and author of the new book Nexus: A Brief History of Information Networks from the Stone Age to AI, was asked earlier this week what people can do to influence the ways AI is regulated. Harari responded that there is only so much that fifty people working individually can accomplish. But, he underscored, fifty people working together with a collective purpose can achieve so much more. The takeaway, then, is to find or start groups where we can focus our individual talents and energies, with others who share our values, toward common objectives and visions. Some initiatives are bringing together tech companies and stakeholder groups, such as faith leaders, academic researchers, and content producers, with opportunities for dialogue and greater understanding of perspectives, particularly the interests and voices of those who are often underrepresented. I am participating in one such group and will write more about this in the future.

 

I titled this post Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose? "The more things change, the more they stay the same", posed in question form. I do not know the answer to that right now, and no human -- or AI system -- can answer that for certain either.

 

  • Will our relationships with emerging technologies like AI and Generative AI tip more toward AI for Good or AI for Bad?

 

  • Is the outcome of our potentially AI-augmented futures predestined and inevitable, or subject to our free wills and intrepid determination?

 

That is up to each and all of us.

 

One final point and an update for this post #5,000


A look back at this blog's Fall 2010 posts during its first few months of existence reveals that the ethics, information, and tech issues we were dealing with then are, unsurprisingly, just as pertinent, and in many cases more impactful, now: 


social media, cyberbullying and online humiliation, media ethics, digital citizenship, privacy, cybertracking, surveillance, data collection ethics, plagiarism, research fraud, human subject protections, cybervigilantism, copyright law, free speech, intellectual freedom, censorship, whistleblowers, conspiracy theories, freedom of information, misinformation, transparency, historically marginalized communities, civility, compassion, respect


In the summer of 2025, my Bloomsbury Libraries Unlimited (BLU) textbook, Ethics, Information, and Technology, will be published. The book will include chapters addressing all of the topics and issues above, and much more. I am very pleased to share the book's cover image below. My sincere thanks and gratitude to all of the individuals who have supported this project and journey.




Tuesday, September 17, 2024

New Gift Supports Research in AI, Ethics; Duquesne University, September 16, 2024

Duquesne University; New Gift Supports Research in AI, Ethics

"Duquesne University’s Carl G. Grefenstette Center for Ethics in Science, Technology, and Law has received a $600,000 gift from the Henry L. Hillman Foundation to support the center’s mission to research ethical issues confronting society in the intersected fields of science, technology and law.

The center leverages the university’s expertise and commitment to the study of ethics to promote partnerships with leading institutions in order to become a transformational force for both Duquesne and the evolving global community. 
 
"We are grateful to the Henry L. Hillman Foundation for this latest generous gift to the Grefenstette Center,” said Duquesne University President Ken Gormley. “This new funding will allow the center to ramp up its work as a pivotal player in the modern field of ethics at a time when technology is changing at a rapid pace and creating new societal challenges.”
 
The Grefenstette Center is the first in the world to bring Catholic, Spiritan values and ideals in an ecumenical framework to grapple with the growing challenges presented by science and technology in society. The center hosts an annual tech ethics symposium, a student-focused hackathon (hacking4humanity) and regularly supports and publishes new research at the intersection of ethics, technology, science, and faith. 


Led by Executive Director Dr. John Slattery, the center was recently named part of the National AI Safety Institute Consortium." 

GRAMBLING STATE RECEIVES $700,000 NSF GRANT TO BROADEN RESEARCH ETHICS EDUCATION; Grambling State News, September 17, 2024

Grambling State News; GRAMBLING STATE RECEIVES $700,000 NSF GRANT TO BROADEN RESEARCH ETHICS EDUCATION

"Titled “Fostering a Culture of Research Ethics and Integrity: An Institutional Transformational Project,” the project’s overarching goals are to promote a culture of research integrity and build robust research capabilities through more substantial training.

GSU will add to the current requirement for faculty and graduate students to complete specific responsible conduct of research (RCR) training modules via new, comprehensive, university-wide Department of Research Ethics and Integrity (DREI), that will be dedicated to advancing responsible and ethical research practices.

“The grant proposal was submitted to NSF’s Ethical and Responsible Research (ER2) program, which has an aim to support fundamental research about what constitutes or promotes responsible and ethical conduct of research (RECR) — particularly research with human subjects as participants,” Jackson said. “In that, grant programs through NSF are generally very competitive, I wanted there to be little doubt about what we were aiming to do. So, the title explicitly indicated what our project is about”.

“As Grambling State is endeavoring to enhance its research profile, our goal is to strengthen the university’s research infrastructure through this comprehensive effort that will result in a new department — The Department of Research Ethics and Integrity (DREI)”

“The project aims to foster an atmosphere, whereby all persons understand the importance of conducting research ethically and responsibly by providing essential training,” Jackson said. “The implementation of more substantial training will aid in the continued building of robust research capabilities at our university.”

Jackson said that currently, only select members of the university community have to complete limited research ethics training (i.e., one or two online courses; required of faculty who submit grant proposals to particular federal agencies and students conducting dissertation or thesis research)."

US Supreme Court's Roberts hears key Democrat's call for enforceable ethics code; Reuters, September 17, 2024

Nate Raymond, Reuters; US Supreme Court's Roberts hears key Democrat's call for enforceable ethics code

"The Democratic chair of the U.S. Senate Judiciary Committee on Tuesday argued during a closed-door federal judiciary meeting attended by Chief U.S. Supreme Court Justice John Roberts that the high court's recently-adopted ethical code of conduct falls short and needs a means of enforcement, a person familiar with the matter said.

Democratic Senator Dick Durbin of Illinois spoke as one of several lawmakers invited to attend the semi-annual meeting of the U.S. Judicial Conference, the federal judiciary's top policymaking body, which Roberts heads."

Ohio sheriff asks for residents' addresses with Kamala Harris signs to send illegal immigrants to homes; Fox News, September 16, 2024

Stepheny Price, Fox News; Ohio sheriff asks for residents' addresses with Kamala Harris signs to send illegal immigrants to homes

"A sheriff in Ohio took to social media to issue a warning to the public that anyone who is showing support for Vice President Kamala Harris's campaign could eventually house some extra guests. 

In a post on his personal campaign page, Portage County Sheriff Bruce Zuchowski appeared to encourage residents to write down the addresses of supporters for Democratic presidential candidate Kamala Harris."

Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture; The Hastings Center, September 12, 2024

 The Hastings Center; Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture

"A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists: Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science Timothy Caulfield, LB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta; Best-selling author & TV host Moderator: Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

How Elon Musk Destroyed Twitter; Fresh Air, NPR, September 11, 2024

 Fresh Air, NPR; How Elon Musk Destroyed Twitter

"After buying Twitter in 2022, Elon Musk instituted sweeping changes. He laid off or fired about 75% of the staff –including about half the data scientists. He also ended rules banning hate speech and misinformation. Authors Kate Conger and Ryan Mac recount the takeover in Character Limit."

Digital Image Creation Using AI Risks Copyright Infringement; Bloomberg Law, September 16, 2024

Brian Moriarty, Timothy Meagher, and Daniel Fleisher, HBSR, Bloomberg Law; Digital Image Creation Using AI Risks Copyright Infringement

"Generative artificial intelligence has radically transformed the world of digital images. Anyone seeking to make a website, a video, or any other visual media can quickly use an AI program to convert their ideas into a new image with help from a few text prompts. 

The image maker can do so at low cost and without the need to hire a digital artist to create the image. Copyright protection may not be available for the new creation (because a computer and not a human created the image). But the image maker may mistakenly believe that the final AI creation doesn’t infringe others’ copyrights because it’s a new image. This isn’t the case."

Trump, Outrage and the Modern Era of Political Violence; The New York Times, September 16, 2024

The New York Times; Trump, Outrage and the Modern Era of Political Violence

"Mr. Trump, who as recently as last week’s debate with Ms. Harris blamed Democrats for the shooting at a rally in Butler, Pa., that struck his ear in July, attributed Sunday’s attempt to the president and vice president as well, arguing that the arrested suspect was acting in response to their political attacks.

“He believed the rhetoric of Biden and Harris, and he acted on it,” Mr. Trump told Fox News on Monday. “Their rhetoric is causing me to be shot at, when I am the one who is going to save the country, and they are the ones that are destroying the country — both from the inside and out.”

Even as he complained that the Democrats had made him a target by calling him a threat to democracy, he repeated his own assertion that “these are people that want to destroy our country” and called them “the enemy from within” — certainly language no less provocative than that used about him.

Indeed, within hours, his campaign emailed a list of quotes from Mr. Biden, Ms. Harris and other Democrats attacking Mr. Trump with phrases like “a threat to our democracy” and a “threat to this nation,” without noting that just last week during the debate the former president said “they’re the threat to democracy.”

One of Mr. Trump’s most prominent and vocal supporters went so far as to question why Mr. Biden and Ms. Harris have not been targeted for murder. “And no one is even trying to assassinate Biden/Kamala,” Elon Musk, the billionaire social media owner, wrote online.

Mr. Musk later deleted the post and called it a joke, but the White House pushed back. “Violence should only be condemned, never encouraged or joked about,” said Andrew Bates, a White House spokesman. “This rhetoric is irresponsible.”"

How Vance and Trump’s Lies About Springfield, Ohio, Migrants Continue to Unravel; Intelligencer, New York Magazine, September 16, 2024

Intelligencer, New York Magazine; How Vance and Trump’s Lies About Springfield, Ohio, Migrants Continue to Unravel

"What about J.D. Vance and Trump’s purported evidence?

Vance continues to be a leading proponent of numerous incendiary claims about the Haitian migrant community in Springfield. He has promoted the pet-eating rumors, even as they were debunked by state and local officials and media organizations, and over the past week he has added his own fuel to the fire — citing purported firsthand reports his office has received from constituents that support the allegations. When pressed to put forward actual evidence, however, he has not. And during an interview with CNN’s Dana Bash on Sunday morning, he seemed to admit that he and Trump were creating stories for media attention:"

Meta bans RT and other Russian state media networks; Reuters, September 17, 2024

Reuters; Meta bans RT and other Russian state media networks

"Facebook owner Meta said on Monday it was banning RT, Rossiya Segodnya and other Russian state media networks from its platforms, claiming the outlets had used deceptive tactics to carry out covert influence operations online.

The ban, strongly criticised by the Kremlin, marks a sharp escalation in measures by the world's biggest social media company against Russian state media, after years of more limited steps such as blocking the outlets from running ads and reducing the reach of their posts."

White House blasts Elon Musk for X post about Biden and Harris assassination; The Guardian, September 16, 2024

The Guardian and agencies; White House blasts Elon Musk for X post about Biden and Harris assassination

"The White House has condemned Elon Musk for tweeting “no one is even trying to assassinate Biden/Kamala” in response to an X user asking “Why they want to kill Donald Trump?”

The president’s office issued a statement Monday criticizing the “irresponsible” post, which was accompanied by an emoji face with a raised eyebrow. The White House said: “Violence should only be condemned, never encouraged or joked about. This rhetoric is irresponsible.” The statement added that there should be “no place for political violence or for any violence ever in our country”.

The Secret Service also said on Monday it was aware of a post by the billionaire on the X social network. Musk, who owns the platform, formerly known as Twitter, made the post after a man suspected of planning to assassinate Donald Trump at his golf course in West Palm Beach was arrested on Sunday.

Musk, himself a Trump supporter, was quickly criticized by X users from the left and right, who said they were concerned his words to his nearly 200m X followers could incite violence against Biden and Harris.

The tech billionaire deleted the post but not before the Secret Service, tasked with protecting current and former presidents, vice-presidents and other notable officials, took notice."

Sunday, September 15, 2024

Tristan Harris & Aza Raskin: We Need Laws Where Companies Are Held Accountable For Harms Created By A.I.; Fox News Radio, September 13, 2024

Fox News Radio; Tristan Harris & Aza Raskin: We Need Laws Where Companies Are Held Accountable For Harms Created By A.I.

"Tristan Harris & Aza Raskin, co-founders of the Center for Humane Technology, joined Brian Kilmeade Show to discuss the dangers of A.I. Tristan and Aza spoke about the mental health risks of A.I. on children. Raskin and Harris compared A.I. to a little kid who, after being birthed, causes havoc. Harris and Raskin believe that there need to be laws holding companies accountable for any harms created by A.I., the way parents are held accountable when their child causes problems. Aza and Tristan also spoke about how the fundamental uncomfortable truth of AI is that the promise of AI and the peril of AI cannot be separated. Adding that the same technology that allows us to edit our family photos and develop new antibiotics also enables deepfake nudes of teen girls and can create super pandemics."

Pulitzer winner returns to the University of Pittsburgh for forum on Supreme Court ethics; Pittsburgh Post-Gazette, September 12, 2024

Abby Lipold, Pittsburgh Post-Gazette; Pulitzer winner returns to the University of Pittsburgh for forum on Supreme Court ethics

"Mr. Murphy said about half of the readers of his series thought “there was an obvious breach in propriety,” but the other half thought the issue wasn’t worth reporting on."

‘I quit my job as a content moderator. I can never go back to who I was before.’; The Washington Post, September 9, 2024

The Washington Post; ‘I quit my job as a content moderator. I can never go back to who I was before.’

"Alberto Cuadra worked as a content moderator at a video-streaming platform for just under a year, but he saw things he’ll never forget. He watched videos about murders and suicides, animal abuse and child abuse, sexual violence and teenage bullying — all so you didn’t have to. What shows up when you scroll through social media has been filtered through an army of tens of thousands of content moderators, who protect us at the risk of their own mental health.

Warning: The following illustrations contain references to disturbing content."

Florida county restoring dozens of books to school libraries after ‘book ban’ lawsuit; Politico, September 12, 2024

ANDREW ATTERBURY, Politico; Florida county restoring dozens of books to school libraries after ‘book ban’ lawsuit

"A northeast Florida school district this week agreed to restore 36 books that were challenged and previously pulled from campus libraries in a settlement of a federal lawsuit fighting how local officials carried out the state’s policies for shielding students from obscene content.

The settlement reached by Nassau County school officials and a group of parents, students and the authors of the removed children’s book “And Tango Makes Three” marks a significant twist in the ongoing legal battles surrounding Florida’s K-12 book restrictions, which have been derided as “book bans” by opponents. Under the agreement, that book and others such as “The Bluest Eye” by Toni Morrison and “The Clan of the Cave Bear” by Jean Auel will once again be available to students after being removed last year...

As part of the agreement, Nassau school officials acknowledged that “And Tango Makes Three” — a kids’ book about a penguin family at New York’s Central Park Zoo with two dads — contains no “obscene” material and is suitable for students of all ages. This book and the authors are also plaintiffs in a separate lawsuit challenging how the work was removed from school libraries in Escambia County, a case that remains ongoing.

These federal lawsuits target how local school boards are enacting policies crafted by Republican lawmakers and Gov. Ron DeSantis’ administration — specifically how parents and others can raise objections about potentially inappropriate books at schools."

Saturday, September 14, 2024

G20 nations agree to join efforts to fight disinformation and set AI guidelines; AP, September 13, 2024

Gabriela Sá Pessoa, AP; G20 nations agree to join efforts to fight disinformation and set AI guidelines

"Group of 20 leaders agreed Friday to join efforts to fight disinformation and set up an agenda on artificial intelligence as their governments struggle against the speed, scale and reach of misinformation and hate speech.

The ministers, who gathered this week in Maceio, the capital of the northeastern state of Alagoas, emphasized in a statement the need for digital platforms to be transparent and “in line with relevant policies and applicable legal frameworks.”

It is the first time in the G20’s history that the group recognizes the problem of disinformation and calls for transparency and accountability from digital platforms, João Brant, secretary for digital policy at the Brazilian presidency, told The Associated Press by phone.

G20 representatives also agreed to establish guidelines for developing artificial intelligence, calling for “ethical, transparent, and accountable use of AI,” with human oversight and compliance with privacy and human rights laws."

'It just exploded': Springfield woman claims she never meant to spark false rumors about Haitians; NBC News, September 13, 2024

Alicia Victoria Lozano, NBC News; 'It just exploded': Springfield woman claims she never meant to spark false rumors about Haitians

"The woman behind an early Facebook post spreading a harmful and baseless claim about Haitian immigrants eating local pets that helped thrust a small Ohio city into the national spotlight says she had no firsthand knowledge of any such incident and is now filled with regret and fear as a result of the ensuing fallout...

NewsGuard, a media watchdog that monitors for misinformation online, found that Lee had been among the first people to publish a post to social media about the rumor, screenshots of which circulated online. The neighbor, Kimberly Newton, said she heard about the attack from a third party, NewsGuard reported.

Newton told NewsGuard that Lee’s Facebook post misstated her story, and that the owner of the missing cat was “an acquaintance of a friend” rather than her daughter’s friend. Newton could not be reached for comment."

Opinion: Fox News cleans up another Trump mess; The Washington Post, September 13, 2024

The Washington Post; Opinion: Fox News cleans up another Trump mess


"It was a case study in how the dominant “news” organ of the right cleans up Trump’s messes. When President Joe Biden had his disastrous debate, liberal outlets and commentators panned the performance and ultimately helped to force him out of the race. But when Trump had what was, objectively, a bad night, Fox News led a movement to claim it didn’t happen.

Sixty-seven million viewers saw an out-of-control Trump claim he won the 2020 election, complain that those who attacked the Capitol on Jan. 6, 2021, were “treated so badly,” argue about his crowd size, assert that he had read that Harris “was not Black” and that Biden “hates her,” admit that he still only has “concepts of a plan” on health care, make odd statements such as “I got involved with the Taliban” and “she wants to do transgender operations on illegal aliens that are in prison,” and utter this ludicrous slander about Haitian migrants: “They’re eating the dogs, the people that came in. They’re eating the cats. They’re eating — they’re eating the pets of the people that live there.”

Fox News then told its viewers (14 million people watched the simulcast on the network) that they had not seen what they just saw. Unless I missed it, viewers also weren’t told the other news of the night, that Taylor Swift had endorsed Harris after the debate.

Often, after my weekly cataloguing of Trump’s madness and mayhem, readers ask why his followers don’t see that he is off his rocker. This is why. Fox News sane-washes him — and it sets the tone for the entire MAGA social media ecosystem."

Friday, September 13, 2024

Laura Loomer’s Greatest Hits; NewsGuard's Reality Check, September 13, 2024


"Loomer has initiated or promoted 17 of the provably false narratives on significant news topics in NewsGuard’s catalog of False Narratives.

Conservative commentator Laura Loomer has been in the headlines this week amid reports that she has been a regular on former President Donald Trump’s campaign plane and has steered him toward promoting conspiracy theories. These include the debunked claims of Haitian migrants abducting and eating cats and dogs in Springfield, Ohio.  

Loomer’s claims have long been a subject of interest at NewsGuard. Here’s what we know:"

Laura Loomer, a Social-Media Instigator, Is Back at Trump’s Side; The New York Times, September 12, 2024


"A far-right activist known for her endless stream of sexist, homophobic, transphobic, anti-Muslim and occasionally antisemitic social media posts and public stunts, Ms. Loomer has made a name for herself over the past decade by unabashedly claiming 9/11 was “an inside job,” calling Islam “a cancer,” accusing Ron DeSantis’s wife of exaggerating breast cancer and claiming that President Biden was behind the attempt to assassinate Mr. Trump in July.

Just two days before the debate, Ms. Loomer, 31, posted a racist joke about the vice president, whose mother was Indian American. Ms. Loomer wrote on X that if Ms. Harris won the election, the White House would “smell like curry.”

For many observers, including some of Mr. Trump’s most important allies, the Republican presidential nominee’s choice at a critical moment of the campaign to platform a social-media instigator, albeit one with nearly 1.3 million followers on X, was stunning.

“The history of this person is just really toxic,” Senator Lindsey Graham of South Carolina, a Trump ally, told a reporter for HuffPost on Thursday. “I don’t think it’s helpful at all.”

His comments were echoed by Representative Marjorie Taylor Greene, a Republican from Georgia and a devoted supporter of Mr. Trump. “I don’t think that she has the experience or the right mentality to advise a very important presidential election,” Ms. Greene told reporters Thursday morning."

Even Free Libraries Come With a Cost; The National Law Review, September 13, 2024

  Anisa Noorassa of McDermott Will & Emery, The National Law Review; Even Free Libraries Come With a Cost

"The US Court of Appeals for the Second Circuit affirmed a district court’s judgment of copyright infringement against an internet book archive, holding that its free-to-access library did not constitute fair use of the copyrighted books. Hachette Book Group Inc. v. Internet Archive, Case No. 23-1260 (2d Cir. Sept. 4, 2024) (Menashi, Robinson, Kahn, JJ.).

Hachette Book Group, HarperCollins Publishers, John Wiley & Sons, and Penguin Random House (collectively, the publishers) brought suit against Internet Archive alleging that its “Free Digital Library,” which loans copies of the publishers’ books without charge, violated the publishers’ copyrights. Internet Archive argued that its use of the publishers’ copyrighted material fell under the fair use exception to the Copyright Act because Internet Archive acquired physical books and digitized them for borrowing (much like a traditional library) and maintained a 1:1 ratio of borrowed material to physical copies except for a brief period during the COVID-19 pandemic.

The district court reviewed the four statutory fair use factors set forth in § 107 of the Copyright Act:

  • The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.
  • The nature of the copyrighted work.
  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole.
  • The effect of the use upon the potential market for or value of the copyrighted work.

The district court found that Internet Archive’s use of the works was not covered by the fair use exception because its use was non-transformative, was commercial in nature due to its solicitation of donations, and was disruptive of the market for e-book licenses. Internet Archive appealed.

The Second Circuit affirmed, addressing each factor in turn."