Thursday, September 26, 2024

Artist sues after US rejects copyright for AI-generated image; Reuters, September 26, 2024

Blake Brittain, Reuters; Artist sues after US rejects copyright for AI-generated image

"Artist Jason M. Allen asked a Colorado federal court on Thursday to reverse a U.S. Copyright Office decision that rejected copyright protection for an award-winning image he created with artificial intelligence...

A Copyright Office tribunal affirmed the decision last year, finding the image as a whole was not copyrightable because it contained more than a minimal amount of AI-created material.

The office has previously rescinded copyrights for images that artist Kris Kashtanova created using Midjourney. It also rejected a copyright application for an image that computer scientist Stephen Thaler said his AI system created autonomously. Thaler has since appealed...

The case is Allen v. Perlmutter, U.S. District Court for the District of Colorado, No. 1:24-cv-02665."

Perspectives in Artificial Intelligence: Ethical Use; Marquette Today, September 20, 2024

Andrew Goldstein, Marquette Today; Perspectives in Artificial Intelligence: Ethical Use

"Ethical application 

While artificial intelligence unlocks broad possibilities for positive change, unethical actors have access to these same tools. For instance, companies hoping to grow cigarette sales can target people who are prone to smoking or trying to quit with greater precision. Deepfake videos allow scam callers to imitate the faces and voices of loved ones.  

In this world, it is more important than ever that students be trained on the limits of AI and its proper use cases. 

“We need to think about the societal impact of artificial intelligence; who gets this data, what it’s being used for and how we steer people toward value-creating activities,” Ow says. “Using AI has the potential to improve your life and to provide insights and opportunities for the individual, the community and society."

Wednesday, September 25, 2024

OpenAI, Microsoft, Amazon among first AI Pact signatories; Euronews, September 25, 2024

Cynthia Kroet, Euronews; OpenAI, Microsoft, Amazon among first AI Pact signatories

"OpenAI, Microsoft and Amazon are among 100 companies who are the first to sign up to a voluntary alliance aiming to help usher in new AI legislation, the European Commission said today (25 September)...

The Commission previously said that some 700 companies have shown interest in joining the Pact – which involves voluntary preparatory commitments to help businesses get ready for the incoming AI Act...

The Pact supports industry's voluntary commitments related to easing the uptake of AI in organisations, identifying AI systems likely to be categorised as high-risk under the rules and promoting AI literacy.

In addition to these core commitments, more than half of the signatories committed to additional pledges, including ensuring human oversight, mitigating risks, and transparently labelling certain types of AI-generated content, such as deepfakes, the Commission said...

The AI Act, the world’s first legal framework that regulates AI models according to the risk they pose, entered into force in August."

Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money.; The New York Times, September 25, 2024

Chris Hughes, The New York Times; Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money.

"Mr. Trump appeals to some Silicon Valley elites because they identify with the man. To them, he is a fellow victim of the state, unjustly persecuted for his bold ideas. Practically, he is also the shield they need to escape accountability. Mr. Trump may threaten democratic norms and spread disinformation; he could even set off a recession, but he won’t challenge their ability to build the technology they like, no matter the social cost...

As much as they want to influence Mr. Trump’s policies, they also want to strike back at the Biden-Harris administration, which they believe has unfairly targeted their industry.

More than any other administration in the internet era, President Biden and Ms. Harris have pushed tech companies toward serving the public interest...

Last year, Mr. Andreessen, whose venture capital firm is heavily invested in crypto, wrote a widely discussed “manifesto” claiming that enemy voices of “bureaucracy, vetocracy, gerontocracy” are opposed to the “pursuit of technology, abundance and life.” In a barely concealed critique of the Biden-Harris administration, he argued that those who believe in carefully assessing the impact of new technologies before adopting them are “deeply immoral.”

Hillary Clinton: To err is human, to empathize is superhuman; The Washington Post, September 25, 2024

Hillary Rodham Clinton, The Washington Post; Hillary Clinton: To err is human, to empathize is superhuman

"Back in the 1990s, from the time she was 15 until she was 20, Shannon was active in the violent white supremacy movement. She attended Ku Klux Klan rallies, tagged public property with swastikas, assaulted people of color, tear-gassed an LGBTQ nightclub and underwent paramilitary training to prepare for the race war her neo-Nazi leaders promised was imminent. Her comrades were white supremacists like the fanatics who years later carried torches through Charlottesville chanting “Jews will not replace us!” and like many of the insurrectionists who stormed the Capitol on Jan. 6, 2021.

Then, remarkably, she managed to get herself out and change her life. Now, Shannon helps people escape violent extremism. She’s seen how the dangerous, hateful movement has metastasized. The rise of social media allowed white power leaders to more easily reach and radicalize thousands of recruits."

NSF and philanthropic partners invest more than $18M to prioritize ethical and societal considerations in the creation of emerging technologies; U.S. National Science Foundation (NSF), September 23, 2024

U.S. National Science Foundation (NSF); NSF and philanthropic partners invest more than $18M to prioritize ethical and societal considerations in the creation of emerging technologies

"The U.S. National Science Foundation announced an inaugural investment of more than $18 million to 44 multidisciplinary, multi-sector teams across the U.S. through the NSF Responsible Design, Development and Deployment of Technologies (NSF ReDDDoT) program. NSF ReDDDoT invests in the creation of technologies that promote the public's well-being and mitigate potential harms by seeking to ensure that ethical, legal, community and societal considerations are embedded in the lifecycle of technology's creation and use. NSF launched this program in collaboration with leading philanthropic partners including the Ford Foundation, the Patrick J. McGovern Foundation and Siegel Family Endowment.

"NSF is committed to creating mutually beneficial research collaborations among diverse partners who contribute their expertise and resources to accelerating technology innovation that positively addresses pressing national, societal and geostrategic challenges," said Erwin Gianchandani, assistant director for Technology, Innovation and Partnerships. "Through a robust public-private partnership with philanthropies, NSF's investment in ReDDDoT aims to ensure that TIP advances the design, development and deployment of new technologies responsibly. This investment is consistent with the 'CHIPS and Science Act of 2022,' in which Congress called upon TIP to invest in exactly this approach when pursuing the key technology areas listed in that law."

NSF awarded 30 teams Phase 1 funding: 21 teams will receive planning grants of up to $300,000 each for up to two years to facilitate collaborative transdisciplinary and multi-sector activities to plan for submission of larger proposals, while an additional nine teams will receive Phase 1 funding of up to $75,000 each to plan and host workshops designed to raise awareness and identify relevant approaches and needs in the key technology areas identified in the "CHIPS and Science Act of 2022."

Additionally, NSF awarded Phase 2 funding to 14 teams that demonstrated maturity in artificial intelligence, biotechnology, or natural and anthropogenic disaster prevention or mitigation, key technology areas in the statute that TIP emphasized for ReDDDoT funding. Each Phase 2 team will receive up to $1.5 million over three years to expand upon their identified experience in use-inspired and translational activities in responsible design, development and deployment of innovative technology.

The ReDDDoT program invited proposals from teams that examined and demonstrated the principles, methodologies and impacts associated with ethical, legal, community and societal considerations of technology's creation and use, especially those specified in the "CHIPS and Science Act of 2022." NSF anticipates issuing a second ReDDDoT funding opportunity in the future that will build on this round of funding to ensure ethical, legal, community, and societal considerations are embedded in the lifecycle of technology’s creation.

NSF ReDDDoT Awardees

Awardees are grouped by award type and then listed in alphabetical order by organization. The full award list can be found on NSF Award Search webpage."

Mark Zuckerberg Is Done With Politics; The New York Times, September 24, 2024

Theodore Schleifer, The New York Times; Mark Zuckerberg Is Done With Politics

"Instead of publicly engaging with Washington, Mr. Zuckerberg is repairing relationships with politicians behind the scenes. After the “Zuckerbucks” criticism, Mr. Zuckerberg hired Brian Baker, a prominent Republican strategist, to improve his positioning with right-wing media and Republican officials. In the lead-up to November’s election, Mr. Baker has emphasized to Mr. Trump and his top aides that Mr. Zuckerberg has no plans to make similar donations, a person familiar with the discussions said.

Mr. Zuckerberg has yet to forge a relationship with Vice President Kamala Harris. But over the summer, Mr. Zuckerberg had his first conversations with Mr. Trump since he left office, according to people familiar with the conversations."

Mark Zuckerberg Isn’t Done With Politics. His Politics Have Just Changed.; Mother Jones, September 24, 2024

Tim Murphy, Mother Jones; Mark Zuckerberg Isn’t Done With Politics. His Politics Have Just Changed.

"On Tuesday, the New York Times reported that one of the world’s richest men had recently experienced a major epiphany. After bankrolling a political organization that supported immigration reform, espousing his support for social justice, and donating hundreds of millions of dollars to support local election workers during the 2020 election, “Mark Zuckerberg is done with politics.”

The Facebook founder and part-time Hawaiian feudal lord, according to the piece, “believed that both parties loathed technology and that trying to continue engaging with political causes would only draw further scrutiny to their company,” and felt burned by the criticism he has faced in recent years, on everything from the proliferation of disinformation on Facebook to his investment in election administration (which conservatives dismissively referred to as “Zuckerbucks”). He is mad, in other words, that people are mad at him, and it has made him rethink his entire theory of how the world works.

It’s an interesting piece, which identifies a real switch in how Zuckerberg—who along with his wife, Priscilla Chan, has made a non-binding pledge to give away a majority of his wealth by the end of his lifetime—thinks about his influence and his own ideology. But there’s a fallacy underpinning that headline: Zuckerberg isn’t done with politics. His politics have simply changed."

Meta Fails to Block Zuckerberg Deposition in AI Copyright Suit; Bloomberg Law, September 25, 2024

Aruni Soni, Bloomberg Law; Meta Fails to Block Zuckerberg Deposition in AI Copyright Suit

"A federal magistrate judge opened the door to a deposition of Meta Platforms Inc. CEO Mark Zuckerberg in a copyright lawsuit over the tech company’s large language model, denying the social media giant’s bid for a protective order.

Magistrate Judge Thomas S. Hixson denied the request to block the deposition because the plaintiffs supplied enough evidence that Zuckerberg is the “chief decision maker and policy setter for Meta’s Generative AI branch and the development of the large language models at issue in this action,” he said in the order filed Tuesday in the US District Court for the Northern District."

OpenAI Training Data to Be Inspected in Authors’ Copyright Cases; Hollywood Reporter, September 24, 2024

Winston Cho, Hollywood Reporter; OpenAI Training Data to Be Inspected in Authors’ Copyright Cases

"For the first time, OpenAI will provide access to its training data for review of whether copyrighted works were used to power its technology.

In a Tuesday filing, authors suing the Sam Altman-led firm and OpenAI indicated that they came to terms on protocols for inspection of the information. They’ll seek details related to the incorporation of their works in training datasets, which could be a battleground in the case that may help establish guardrails for the creation of automated chatbots...

U.S. District Judge Vince Chhabria at a hearing on Friday questioned whether the attorneys can adequately represent the writers.

“It’s very clear to me from the papers, from the docket and from talking to the magistrate judge that you have brought this case and you have not done your job to advance it,” Chhabria said, according to Politico. “You and your team have barely been litigating the case. That’s obvious… This is not your typical proposed class action. This is an important case. It’s an important societal issue. It’s important for your clients.”

‘War Game,’ political violence, and the risk of extremism in the armed forces; 1A, WAMU, September 24, 2024

Michael Falero, 1A, WAMU; ‘War Game,’ political violence, and the risk of extremism in the armed forces

"One of the hallmarks of American democracy is upholding the principle of the peaceful transfer of power. Key to that principle is the word “peaceful.”

On Jan. 6, 2021 that principle was tested. Insurrectionists, some organized, tried and failed to disrupt the certification of the 2020 election.

What if that violence happened in a future American election? What if extremist groups who felt democracy was at risk included a small number of active members of the military or National Guard? 

A new documentary, “War Game,” follows an effort to play out that scenario, to see how bipartisan participants, including former politicians and retired military officers, would react to election violence from a fake White House situation room. 

What are the risks in responding to violence around an election outcome? How would that situation become more complicated if a handful of members of the military were involved in trying to overturn an election? And how common is it for extremism to enter the ranks of branches of the military?

We discuss those questions and the film, spoiler free.

Janessa Goldbeck

CEO, Vet Voice Foundation

Jesse Moss

co-director and producer, “War Game"

Nikki Wentling

disinformation and extremism reporter, The Military Times...


  • [Interviewer Jen White] So we hear there from your own experience, Janessa, that part of this exercise is about your experience with your father, and you connect it to the weakening of democratic institutions. If this is an exercise looking at the strength of our institutions, like our branches of government, what did you take away from the scenario about their resilience in the face of political violence and the resilience of the leaders within those institutions?


  • [Janessa Goldbeck, CEO, Vet Voice Foundation]

    You know, one thing that folks always come up to me about after screenings is this part of the film in particular. So many Americans around the country are facing a deep divide within their own homes: someone in their family who is very invested in a conspiracy theory or an ideology that feels completely alien. I obviously have experienced that with my own father. I think a lot of people have a family member, someone they love, who subscribes to a belief system that feels just impossible to wrap your hands around. And I think that's something that isn't necessarily going to be solved by government alone. It's something that we need to invest in: programs and ways that we can actually build bridges in this country. The surgeon general of the United States for the first time has declared loneliness an epidemic in this country, and I think that some of that loneliness is fueling this need for people to connect, and they find that connection in spaces where conspiracy theories and extremist ideology flourish. So I don't know that it's necessarily a problem for government to solve on its own. It's certainly a challenge. I don't have all the answers, but I think more conversation is required, and “War Game” is a provocation for that conversation."

Tuesday, September 24, 2024

New State Laws Are Fueling a Surge in Book Bans; The New York Times, September 23, 2024

The New York Times; New State Laws Are Fueling a Surge in Book Bans

"Books have been challenged and removed from schools and libraries for decades, but around 2021, these instances began to skyrocket, fanned by a network of conservative groups and the spread on social media of lists of titles some considered objectionable...

PEN considers any book that has been removed from access to have been banned, even if the book is eventually put back...

The American Library Association also released a report on Monday based on preliminary data. The group gathers its own information, and relies on a different definition of what constitutes a book ban. For the library association, a book must be removed — not just temporarily, while it is reviewed — to count as being banned...

The library association and PEN America both emphasized that these numbers were almost certainly an undercount. Both groups rely on information from local news reports, but in many districts across the country, there is no education reporter keeping tabs."

PEN America: Book bans doubled in 2023-2024 school year, most from Florida, Iowa; Florida Times-Union, September 24, 2024

C. A. Bridges, Florida Times-Union; PEN America: Book bans doubled in 2023-2024 school year, most from Florida, Iowa

"In the 2022-2023 school year, Florida led the nation in the surge of book challenges and bans, according to free expression advocacy group PEN America. This year, the number of bans has more than doubled.

Research by the nonprofit organization found more than 10,000 instances of book bans across the country, with 8,000 of them from Florida and Iowa, according to preliminary findings released Monday at the start of Banned Books Week. This was largely due to new state laws, PEN America's Kasey Meehan and Sabrina Baêta said.

Florida's HB 1069, which went into effect July 2023, required that any book challenged for "sexual conduct" must be removed during the review process and empowered parents and guardians to challenge books without providing ways for parents or guardians to defend them. That led to a "significant rise in book bans" during the 2023-2024 school year, PEN America said."

AI Art Copyright Stays Doubtful After Appeals Court Argument; Bloomberg Law, September 19, 2024

Kyle Jahner and Aruni Soni, Bloomberg Law; AI Art Copyright Stays Doubtful After Appeals Court Argument

"The first federal appeals court battle over the boundaries of copyright law’s application to AI-generated works carries huge implications for creative industries given the rapid proliferation of the technology. The circumstances upon which copyright vests in work wholly or partly created by AI and who gets to control and enforce that right will hinge on interpretations of cases like Thaler’s."

Censorship Throughout the Centuries; American Libraries, September 3, 2024

Cara S. Bertram, American Libraries; Censorship Throughout the Centuries

"American Libraries travels through time to outline our country’s history of censorship—and the library workers, authors, and advocates who have defended the right to read."

American Library Association reveals preliminary data on 2024 book challenges; American Library Association (ALA), September 23, 2024

American Library Association (ALA); American Library Association reveals preliminary data on 2024 book challenges

"New data shows a slowdown in challenge reports

The American Library Association has released preliminary data documenting attempts to censor books and materials in public, school, and academic libraries during the first eight months of 2024 in preparation for Banned Books Week (September 22-28, 2024).

Between January 1 and August 31, 2024, ALA’s Office for Intellectual Freedom tracked 414 attempts to censor library materials and services. In those cases, 1,128 unique titles were challenged. In the same reporting period last year, ALA tracked 695 attempts with 1,915 unique titles challenged. Though the number of reports to date has declined in 2024, the number of documented attempts to censor books continues to far exceed the numbers prior to 2020. Additionally, instances of soft censorship, where books are purchased but placed in restricted areas, not used in library displays, or otherwise hidden or kept off limits due to fear of challenges illustrate the impact of organized censorship campaigns on students’ and readers’ freedom to read. In some circumstances, books have been preemptively excluded from library collections, taken off the shelves before they are banned, or not purchased for library collections in the first place.

“As these preliminary numbers show, we must continue to stand up for libraries and challenge censorship wherever it occurs,” said American Library Association President Cindy Hohl. “We know library professionals throughout the country are committed to preserving our freedom to choose what we read and what our children read, even though many librarians face criticism and threats to their livelihood and safety. We urge everyone to join librarians in defending the freedom to read. We know people don’t like being told what they are allowed to read, and we’ve seen communities come together to fight back and protect their libraries and schools from the censors.”

The Office for Intellectual Freedom compiles data on book challenges from reports by library professionals in the field and from news stories published throughout the United States. Because many book challenges are not reported to the ALA or covered by the press, the 2024 data compiled by ALA represents only a snapshot of book censorship throughout the first eight months of the year. 

As ALA continues to document the harms of censorship, we celebrate those whose advocacy and support are helping to end censorship in our libraries."

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."

Monday, September 23, 2024

Looking for a Superhero? Check the Public Library.; The New York Times, September 23, 2024

The New York Times; Looking for a Superhero? Check the Public Library.

"One of the most absorbing books I’ve read this year is “That Librarian: The Fight Against Book Banning in America” by Amanda Jones, a school librarian’s account of being targeted by right-wing extremists in Louisiana for speaking in defense of diverse books...

There are actual groomers among us, a crime Ms. Jones takes care to decry, but the only “crime” she committed was speaking in defense of intellectual freedom at a public meeting. For that she was bombarded with unrelenting condemnation and death threats...

“All members of our community deserve to be seen, have access to information, and see themselves in our public library,” she said when it was her turn to speak at the meeting. “Just because you enter a library, it does not mean that you will not see something you don’t like. Libraries have diverse collections with resources from many points of view, and a library’s mission is to provide access to information for all users.”"

Generative AI and Legal Ethics; JD Supra, September 20, 2024

Craig Brodsky, Goodell, DeVries, Leech & Dann, LLP, JD Supra; Generative AI and Legal Ethics

"In his scathing opinion, Cullen joined judges from New York, Massachusetts and North Carolina, among others, by concluding that improper use of AI-generated authorities may give rise to sanctions and disciplinary charges...

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern — GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules ranging from competence (Md. Rule 19-301.1) to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rule 19-305.1 and Md. Rule 19-305.3), to the duties of a lawyer before a tribunal to be candid and pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Rule 19-301.1 includes technical competence, and GAI is just another step forward. It is here to stay. We must embrace it but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage."

Wednesday, September 18, 2024

Kip Currier: Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose?

Kip Currier: Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose?

This is the 5,000th post since this blog launched almost 14 years ago on October 3, 2010. My first post was about a 10/1/10 New York Times article on When Lawyers Can Peek at Facebook. The sentence I referenced as an excerpt from that story was:


"Could the legal world be moving toward a new set of Miranda warnings: “Anything you say, do — or post on Facebook — can be used against you in a court of law”?"

 

Social Media Revisited: What Can We Learn?


The legal world in 2010 -- much of the world, really -- was grappling with what guardrails and guidelines to implement for the then-emerging technology of social media: guardrails like delineating the line between lawyers accessing the public-facing social media pages of potential jurors (okay 👍) and lawyers using trickery to unethically gain access to the private social media pages of possible jurors (not okay 👎), as an excerpt from that Times article distinguishes:


“If I’m researching a case I’ll do Google searches,” said Carl Kaplan, senior appellate counsel at the Center for Appellate Litigation and a lecturer at Columbia Law School. “What’s the difference between that and looking at someone’s Facebook?

“I think it’s good that they’re kind of recognizing reality and seeming to draw a line in the sand between public pages and being sneaky and gaining access to private pages in unethical ways.”

The city bar did specifically note that it would be unethical to become someone’s friend through deception. In fact, the four-page opinion went into great detail in describing a hypothetical example of the improper way to go about becoming someone’s Facebook friend.

 

https://archive.nytimes.com/cityroom.blogs.nytimes.com/2010/10/01/when-lawyers-can-peek-at-facebook/?scp=2&sq=facebook%20ethics&st=cse

 

 

AI for Good, AI for Bad: Guardrails and Guidelines


Any of this sound familiar? It should. In today's AI Age, we're grappling again with what guardrails and guidelines to put in place to protect us from the known and unknown dangers of AI for Bad, while encouraging the beneficial innovations of AI for Good. Back in 2010, too, many of us were still getting up to speed with the novelties, ambiguities -- and the costs and benefits -- of using social media. And a lot of us are doing the same thing right now with AI and Generative AI: brainstorming and writing via chatbots like ChatGPT and Claude, making AI-generated images with DALL-E 3 and Adobe Firefly, using AI to develop groundbreaking medical treatments and make extraordinary scientific discoveries, and much, much more.


In 2024, we know more about social media. We've had more lived experiences with the good, the bad, and sometimes the very ugly potentialities and realities of social media. Yes, social media has made it possible to connect more easily and widely with others. It's also enabled access to information on scales that in the analog era were unimaginable. But it's also come with real downsides and harms, such as cyberbullying, online hate speech, disinformation, and doxxing. Science, too, is uncovering more about the effects of social media and other technologies on our lives in the 2020s. Research, for example, is providing empirical evidence of the deleterious effects of our technology addictions, particularly on the mental health of children, who admit to using their smartphones an average of 4-7 hours every day.

 

What Are the Necessary, Vital Questions?


At these beginning stages of the AI revolution, it is advisable for us to practice some additional mindfulness and reflection on where we've been with technology. And where we are going and want to go. To ask some "lessons learned" and "roads not taken" questions -- the "necessary and vital questions" -- that aren't easily answered, like:


  • Would we as citizens -- mothers, fathers, daughters, sons -- have done anything differently (on micro and/or macro scales) about social media back in 2010?
  • What would we have wanted policymakers, elected leaders, for-profit companies, non-profit agencies, board members and trustees, educators, faith leaders, civil watchdogs, cultural heritage institutions, historically disadvantaged peoples, and other stakeholders to have said or done to better equip our societies to use social media more responsibly, more equitably, and more ethically?
  • What frameworks and protections might we have wanted to devise and embed in our systems to verify that what the social media gatekeepers told us they were doing was actually being done?
  • What legal systems and consequences would we have lobbied for and codified to hold social media owners and content distributors accountable for breaches of the public trust?

 

A Content Moderator's Tale


As I write this post, I'm reminded of a September 9, 2024, Washington Post article, illustrated with comic book-style images, that I read last week, titled ‘I quit my job as a content moderator. I can never go back to who I was before.’ The protagonist in the comic, Alberto Cuadra, is identified as a non-U.S. "former content moderator." Think of content moderators as the "essential workers" of the social media ecosystem, like the essential workers during the COVID-19 pandemic lockdowns who kept our communities running while we were safely "sheltering in place" at home. Content moderators are the unsung online warriors who take jobs with tech companies (e.g., Facebook/Meta, YouTube/Alphabet, Twitter-cum-X, TikTok) to clear out the proverbial social media landmines. They do this by peering at the really icky Internet "stuff": the most depraved creations, the most psychologically injurious content posted to social media platforms around the world, in order to render it inaccessible to users and shield us from these digitally hazardous materials.


To return to the content moderator's story: after suffering from anxiety and other ills caused by the disturbing content with which he had made a Faustian bargain for the sake of gainful employment, Alberto Cuadra ultimately decides that he has to leave his job. He does this to reclaim his physical and mental health, even though the unnamed company where he works provides "a certain number of free sessions with a therapist" for any employee working there. Alberto's short but powerful graphic story, made possible by Washington Post reporter Beatrix Lockwood and illustrator Maya Scarpa, ends with a poignant pronouncement:

 

If I ever have children, I won't let them on any media until they're 18.



The Case for AI/Gen AI Regulation and Oversight


As always when faced with a new technology (whether the 15th century printing press or the 20th century Internet), the disciplines of law, ethics, and policy are playing catch-up, this time with AI and Generative AI. Just as state and city bar associations needed to issue ethics opinions for lawyers on the do's and don'ts of using social media for all types of legal tasks a decade and a half ago, 2024 has seen state bars, and just last month the American Bar Association, publish ethics opinions on what lawyers must and must not do vis-à-vis the use of AI and Generative AI tools. Lawyers don't really have the luxury of ignoring such rules if they want to keep their licenses active and stay in good standing with bar associations and clients. Are there not sufficient reasons and incentives now, though, for non-lawyers to also spell out more of the do's and don'ts for AI? To express their voices and have policies created and enforced that protect their interests too? To not let the loudest voices in the room be the tech companies and "special interests" that have the most to gain from the absence of robust regulatory systems, enforcement mechanisms, and penalties protecting everyone from the bad things that bad people can do with technologies like Generative AI?

 

What Can We Do?


Amidst all of the perils and promises of digital and AI technologies, what can those of us who want more substantive guardrails and guidelines for AI do now, before we look back 14 years from now, in 2038, and wonder what we could have done differently if AI follows a trajectory similar to, or worse than, social media's? While our communities and societies still have a chance to weigh in on what protections and incentives to have for AI, we can join groups that are advocating for regulatory oversight of AI. One thing we know for sure is that being proactive rather than reactive has many advantages in life. First and foremost, it enables us to have more agency: to say what we want and need, and to work toward achieving those goals and aspirations rather than reacting to someone else's objectives. To that end, we can tell our elected leaders what we want AI to do and not do.


Admittedly, it can feel overwhelming to approach an issue like what to do about AI/Gen AI as just one person striving to effect change. Yuval Noah Harari, "big thinker" and author of the new book Nexus: A Brief History of Information Networks from the Stone Age to AI, was asked earlier this week what people can do to influence the ways AI is regulated. Harari responded that there is only so much that fifty people working individually can accomplish; but, he underscored, fifty people working together with a collective purpose can achieve much more. The takeaway, then, is to find or start groups where we can focus our individual talents and energies, alongside others who share our values, toward common objectives and visions. Some initiatives are bringing together tech companies and stakeholder groups -- such as faith leaders, academic researchers, and content producers -- creating opportunities for dialogue and greater mutual understanding, particularly of the interests and voices of those who are often underrepresented. I am participating in one such group and will write more about it in the future.

 

I titled this post Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose? "The more things change, the more they stay the same," posed in question form. I do not know the answer to that right now, and no human -- or AI system -- can answer it for certain either.

 

  • Will our relationships with emerging technologies like AI and Generative AI tip more toward AI for Good or AI for Bad?

 

  • Is the outcome of our potentially AI-augmented futures predestined and inevitable, or subject to our free will and intrepid determination?

 

That is up to each and all of us.

 

One final point and an update for this, post #5,000


A look back at this blog's Fall 2010 posts, from its first few months of existence, reveals that the ethics, information, and tech issues we were dealing with then are, unsurprisingly, just as pertinent now, and in many cases more impactful:


social media, cyberbullying and online humiliation, media ethics, digital citizenship, privacy, cybertracking, surveillance, data collection ethics, plagiarism, research fraud, human subject protections, cybervigilantism, copyright law, free speech, intellectual freedom, censorship, whistleblowers, conspiracy theories, freedom of information, misinformation, transparency, historically marginalized communities, civility, compassion, respect


In the summer of 2025, my Bloomsbury Libraries Unlimited (BLU) textbook, Ethics, Information, and Technology, will be published. The book will include chapters addressing all of the topics and issues above, and much more. I am very pleased to share the book's cover image below. My sincere thanks and gratitude to all of the individuals who have supported this project and journey.