Wednesday, September 18, 2024

Kip Currier: Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose?


This is the 5,000th post since this blog launched almost 14 years ago, on October 3, 2010. My first post was about a 10/1/10 New York Times article, "When Lawyers Can Peek at Facebook." The sentence I excerpted from that story was:


"Could the legal world be moving toward a new set of Miranda warnings: “Anything you say, do — or post on Facebook — can be used against you in a court of law”?"


Social Media Revisited: What Can We Learn?


The legal world in 2010 -- much of the world, really -- was grappling with what guardrails and guidelines to implement for the then-emerging technology of social media: guardrails like drawing a line between lawyers accessing the public-facing social media pages of potential jurors (okay 👍) and lawyers using trickery to unethically gain access to the private social media pages of possible jurors (not okay 👎), a distinction this excerpt from that Times article captures:


“If I’m researching a case I’ll do Google searches,” said Carl Kaplan, senior appellate counsel at the Center for Appellate Litigation and a lecturer at Columbia Law School. “What’s the difference between that and looking at someone’s Facebook?

“I think it’s good that they’re kind of recognizing reality and seeming to draw a line in the sand between public pages and being sneaky and gaining access to private pages in unethical ways.”

The city bar did specifically note that it would be unethical to become someone’s friend through deception. In fact, the four-page opinion went into great detail in describing a hypothetical example of the improper way to go about becoming someone’s Facebook friend.


https://archive.nytimes.com/cityroom.blogs.nytimes.com/2010/10/01/when-lawyers-can-peek-at-facebook/?scp=2&sq=facebook%20ethics&st=cse


AI for Good, AI for Bad: Guardrails and Guidelines


Any of this sound familiar? It should. In today's AI Age, we're grappling again with what guardrails and guidelines to put in place to protect us from the known and unknown dangers of AI for Bad, while encouraging the beneficial innovations of AI for Good. Back in 2010, too, many of us were still getting up to speed with the novelties, ambiguities -- and the costs and benefits -- of using social media. And a lot of us are doing the same thing right now with AI and Generative AI: brainstorming and writing via chatbots like ChatGPT and Claude, making AI-generated images with DALL-E 3 and Adobe Firefly, using AI to develop groundbreaking medical treatments and make extraordinary scientific discoveries, and much, much more.


In 2024, we know more about social media. We've had more lived experiences with the good, the bad, and sometimes the very ugly potentialities and realities of social media. Yes, social media has made it possible to connect more easily and widely with others. It's also enabled access to information on scales that were unimaginable in the analog era. But it has also come with real downsides and harms: cyberbullying, online hate speech, disinformation, and doxxing. Science, too, is uncovering more about the effects of social media and other technologies on our lives in the 2020s. Research, for example, is providing empirical evidence of the deleterious effects of our technology addictions, particularly on the mental health of children, who report using their smartphones an average of 4 to 7 hours every day.


What Are the Necessary, Vital Questions?


At these beginning stages of the AI revolution, it is advisable for us to practice some additional mindfulness and reflection on where we've been with technology, and on where we are going and want to go. To ask some "lessons learned" and "roads not taken" questions -- the "necessary and vital questions" -- that aren't easily answered, like:


  • Would we as citizens -- mothers, fathers, daughters, sons -- have done anything differently (on micro and/or macro scales) about social media back in 2010?
  • What would we have wanted policymakers, elected leaders, for-profit companies, non-profit agencies, board members and trustees, educators, faith leaders, civil watchdogs, cultural heritage institutions, historically disadvantaged peoples, and other stakeholders to have said or done to better equip our societies to use social media more responsibly, more equitably, and more ethically?
  • What frameworks and protections might we have wanted to devise and embed in our systems to verify that what the social media gatekeepers told us they were doing was actually being done?
  • What legal systems and consequences would we have lobbied for and codified to hold social media owners and content distributors accountable for breaches of the public trust?


A Content Moderator's Tale


As I write this post, I'm reminded of a 9/9/24 Washington Post article, illustrated with comic book-style images, that I read last week, titled ‘I quit my job as a content moderator. I can never go back to who I was before.’ The protagonist of the comic, Alberto Cuadra, is identified as a non-U.S. "former content moderator." Think of content moderators as the "essential workers" of the social media ecosystem, like the essential workers during the COVID-19 pandemic lockdowns who kept our communities running while we were safely "sheltering in place" at home. Content moderators are the unsung online warriors who take jobs with tech companies (e.g., Facebook/Meta, YouTube/Alphabet, Twitter-cum-X, TikTok) to clear out the proverbial social media landmines. They do this by reviewing the really icky Internet "stuff" -- the most depraved creations, the most psychologically injurious content posted to social media platforms around the world -- in order to render it inaccessible to users and shield us from these digitally hazardous materials.


Back to the content moderator's story: after suffering from anxiety and other ills caused by the disturbing content he had made a Faustian bargain to view for the sake of gainful employment, Alberto Cuadra ultimately decides that he has to leave his job to reclaim his physical and mental health, despite the unnamed company where he worked providing "a certain number of free sessions with a therapist" for any employee. Alberto's short but powerful graphic story, made possible by Washington Post reporter Beatrix Lockwood and illustrator Maya Scarpa, ends with a poignant pronouncement:


If I ever have children, I won't let them on any media until they're 18.



The Case for AI/Gen AI Regulation and Oversight


As always when faced with a new technology (whether the 15th-century printing press or the 20th-century Internet), the disciplines of law, ethics, and policy are playing catch-up with new disruptive technologies: namely, AI and Generative AI. Just as state and city bar associations needed to issue ethics opinions a decade and a half ago on the do's and don'ts of lawyers using social media for all types of legal tasks, 2024 has seen state bars, and just last month the American Bar Association, publish ethics opinions on what lawyers must and must not do vis-à-vis the use of AI and Generative AI tools. Lawyers don't really have the luxury of ignoring such rules if they want to keep their licenses active and stay in good standing with bar associations and clients. Are there not sufficient reasons and incentives now, though, for non-lawyers to also spell out more of the do's and don'ts for AI? To express their voices and have policies created and enforced that protect their interests too? To ensure that the loudest voices in the room are not the tech companies and "special interests" that have the most to gain from the absence of robust regulatory systems, enforcement mechanisms, and penalties protecting everyone from the bad things that bad people can do with technologies like Generative AI?


What Can We Do?


Amidst all of the perils and promises of digital and AI technologies, what can people who want more substantive guardrails and guidelines for AI do now, before we look back 14 years from now, in 2038, and wonder what we could have done differently if AI follows a trajectory similar to, or worse than, social media's? While our communities and societies still have a chance to weigh in on what protections and incentives to have for AI, we can join groups that are advocating for regulatory oversight of AI. One thing we know for sure is that being proactive rather than reactive has many advantages in life. First and foremost, it gives us more agency: to say what we want and need, and to work toward achieving those goals and aspirations, rather than reacting to someone else's objectives. To that end, we can tell our elected leaders what we want AI to do and not do.


Admittedly, it can feel overwhelming to approach an issue like what to do about AI/Gen AI as just one person striving to effect change. Yuval Noah Harari, "big thinker" and author of the new book Nexus: A Brief History of Information Networks from the Stone Age to AI, was asked earlier this week what people can do to influence the ways AI is regulated. Harari responded that there is only so much that fifty people working individually can accomplish. But, he underscored, fifty people working together with a collective purpose can achieve much more. The takeaway, then, is to find or start groups where we can focus our individual talents and energies, alongside others who share our values, toward common objectives and visions. Some initiatives are bringing together tech companies and stakeholder groups -- faith leaders, academic researchers, content producers -- creating opportunities for dialogue and greater understanding of perspectives, particularly the interests and voices of those who are often underrepresented. I am participating in one such group and will write more about it in the future.


I titled this post Emerging Tech and Ethics Redux: Plus ça change, plus c’est la même chose? -- "The more things change, the more they stay the same," posed in question form. I do not know the answer right now, and no human -- or AI system -- can answer it for certain either.


  • Will our relationships with emerging technologies like AI and Generative AI tip more toward AI for Good or AI for Bad?


  • Is the outcome of our potentially AI-augmented futures predestined and inevitable, or subject to our free will and intrepid determination?


That is up to each and all of us.


One final point and an update for post #5,000


A look back at this blog's Fall 2010 posts during its first few months of existence reveals that the ethics, information, and tech issues we were dealing with then are, unsurprisingly, just as pertinent, and in many cases more impactful, now: 


social media, cyberbullying and online humiliation, media ethics, digital citizenship, privacy, cybertracking, surveillance, data collection ethics, plagiarism, research fraud, human subject protections, cybervigilantism, copyright law, free speech, intellectual freedom, censorship, whistleblowers, conspiracy theories, freedom of information, misinformation, transparency, historically marginalized communities, civility, compassion, respect


In the summer of 2025, my Bloomsbury Libraries Unlimited (BLU) textbook, Ethics, Information, and Technology, will be published. The book will include chapters addressing all of the topics and issues above, and much more. I am very pleased to share the book's cover image below. My sincere thanks and gratitude to all of the individuals who have supported this project and journey.



