Wednesday, July 17, 2024

How Creators Are Facing Hateful Comments Head-On; The New York Times, July 11, 2024

Melina Delkic, The New York Times; How Creators Are Facing Hateful Comments Head-On

"Experts in online behavior also say that the best approach is usually to ignore nasty comments, as hard as that may be.

“I think it’s helpful for people to keep in mind that hateful comments they see are typically posted by people who are the most extreme users,” said William Brady, an assistant professor at Northwestern University, whose research team studied online outrage by looking at 13 million tweets. He added that the instinct to “punish” someone can backfire.

“Giving a toxic user any engagement (view, like, share, comment) ironically can make their content more visible,” he wrote in an email. “For example, when people retweet toxic content in order to comment on it, they are actually increasing the visibility of the content they intend to criticize. But if it is ignored, algorithms are unlikely to pick them up and artificially spread them further.”"
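Brady's mechanism is easy to see in miniature. The sketch below is a toy model, not any platform's actual ranking code: the `Post` class, the weights, and the numbers are all invented for illustration. It demonstrates the point he makes: an engagement-ranked feed counts a critical quote-share the same as an approving one, while ignored content accrues no score at all.

```python
# Toy engagement-ranking sketch (illustrative only; weights are invented).
from dataclasses import dataclass

@dataclass
class Post:
    views: int = 0
    likes: int = 0
    shares: int = 0
    comments: int = 0

    def visibility_score(self) -> float:
        # All engagement counts positively; the ranker cannot tell
        # an outraged quote-share from an approving one.
        return (0.1 * self.views + 1.0 * self.likes
                + 2.0 * self.shares + 1.5 * self.comments)

toxic = Post()
toxic.shares += 1     # one critical quote-share...
toxic.views += 500    # ...and the views it draws to the original post
ignored = Post()      # the ignored post accrues nothing

print(toxic.visibility_score())    # 52.0 -> ranked and spread further
print(ignored.visibility_score())  # 0.0  -> the algorithm passes it over
```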

Tuesday, July 16, 2024

Workday Loses Bid to Toss Bias Claims Over AI Hiring Tools; Bloomberg Law, July 13, 2024

Carmen Castro-Pagán, Bloomberg Law; Workday Loses Bid to Toss Bias Claims Over AI Hiring Tools 

"Workday Inc. must defend against a lawsuit alleging its algorithmic decision-making tools discriminate against job applicants who are Black, over the age of 40, or disabled, according to a federal court opinion on Friday.

The lawsuit adequately alleges that Workday is an agent of its client-employers, and thus falls within the definition of an employer for purposes of federal anti-discrimination laws that protect based on race, age, and disability, the US District Court for the Northern District of California said."

USPTO issues AI subject matter eligibility guidance; United States Patent and Trademark Office (USPTO), July 16, 2024

United States Patent and Trademark Office (USPTO); USPTO issues AI subject matter eligibility guidance

"The U.S. Patent and Trademark Office (USPTO) has issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including in artificial intelligence (AI). This guidance update will assist USPTO personnel and stakeholders in determining subject matter eligibility under patent law (35 § U.S.C. 101) of AI inventions. This latest update builds on previous guidance by providing further clarity and consistency to how the USPTO and applicants should evaluate subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also announces three new examples of how to apply this guidance throughout a wide range of technologies. 

The guidance update, which goes into effect on July 17, 2024, provides a background on the USPTO’s efforts related to AI and subject matter eligibility, an overview of the USPTO’s patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. 

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.” 

The three new examples provide additional analyses under 35 U.S.C. § 101 of hypothetical claims in certain situations to address particular inquiries, such as whether a claim recites an abstract idea or whether a claim integrates the abstract idea into a practical application. They are intended to assist USPTO personnel in applying the USPTO’s subject matter eligibility guidance to AI inventions during patent examination, appeal, and post-grant proceedings. The examples are available on our AI-related resources webpage and our patent eligibility page on our website.

The USPTO continues to be directly involved in the development of legal and policy measures related to the impact of AI on all forms of intellectual property. The guidance update delivers on the USPTO’s obligations under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to provide guidance to examiners and the public on the impact of AI and issues at the intersection of AI and IP, including patent subject matter eligibility. This follows our announcement earlier this year on Inventorship guidance for AI-assisted inventions, as well as AI guidance for practitioners and a request for comments on the impact of AI on certain patentability considerations, including what qualifies as prior art and the assessment of the level of ordinary skill in the art (comments accepted until July 29, 2024).

The full text of the guidance update on patent subject matter eligibility is available on our Latest AI news and reports webpage and the corresponding examples are available on our AI-related resources webpage. The USPTO will accept public comments on the guidance update and the examples through September 16, 2024. Please see the Federal Register Notice for instructions on submitting comments."

Even Disinformation Experts Don’t Know How to Stop It; The New York Times, July 11, 2024

Tiffany Hsu, The New York Times; Even Disinformation Experts Don’t Know How to Stop It

"Holding the line against misinformation and disinformation is demoralizing and sometimes dangerous work, requiring an unusual degree of optimism and doggedness. Increasingly, however, even the most committed warriors are feeling overwhelmed by the onslaught of false and misleading content online."

Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems; American Bar Association (ABA), June 3, 2024

Gary Rhoades, American Bar Association (ABA); Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems

"The Civil Rights Act of 1968, also known as the Fair Housing Act (FHA), banned housing discrimination nationwide on the basis of race, religion, national origin, and color. One key finding that persuaded Dr. Martin Luther King Jr., President Lyndon Johnson, and others to fight for years for the passage of this landmark law confirmed that many Americans were being denied rental housing because of their race. Black families were especially impacted by the discriminatory rejections. They were forced to move on and spend more time and money to find housing and often had to settle for substandard housing in unsafe neighborhoods and poor school districts to avoid homelessness.

April 2024 marked the 56th year of the FHA’s attempt to end such unfair treatment. Despite the law’s broadly stated protections, its numerous state and local counterparts, and decades of enforcement, landlords’ use of high-tech algorithms for tenant screening threatens to erase the progress made. While employing algorithms to mine data such as criminal records, credit reports, and civil court records to make predictions about prospective tenants might partially remove the fallible human element, old and new biases, especially regarding race and source of income, still plague the screening results."
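Rhoades's "ghosts" metaphor can be made concrete with a small sketch. The scoring function below is hypothetical (no real vendor's model; the inputs and weights are invented), but it shows the mechanism the article describes: a score computed from records such as eviction filings, credit data, and criminal history inherits whatever disparities produced those records, even when the algorithm never sees race or source of income directly.

```python
# Stylized tenant-screening score (hypothetical weights, not a real product).
def screening_score(eviction_filings: int, credit_score: int,
                    criminal_records: int) -> float:
    # Each record-based penalty imports the bias of the system that
    # generated the record (over-filing, over-policing, credit gaps).
    return credit_score / 850 - 0.3 * eviction_filings - 0.2 * criminal_records

# Two applicants who would be equally reliable tenants: the second's
# neighborhood was over-policed and over-filed, so the records diverge.
print(round(screening_score(0, 720, 0), 3))  # 0.847 -> approved
print(round(screening_score(2, 680, 1), 3))  # 0.0   -> screened out
```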

Corporate directors weigh AI ethics at first-of-its-kind forum; Harvard Gazette, July 11, 2024

Harvard Gazette; Corporate directors weigh AI ethics at first-of-its-kind forum

"As artificial intelligence surges, corporate directors face a set of urgent ethical considerations. What role can they play in fostering responsible practices for using AI in the workplace? Are they already using the bias-prone technology to sort through job applications?

At the inaugural Directors’ AI Ethics Forum, leaders from the business, government, and nonprofit sectors pondered these questions and more. Convening the group on the Harvard Business School campus was the Edmond & Lily Safra Center for Ethics’ Business AI Ethics research team, an initiative that promotes thoughtful approaches to the rapidly evolving technology."

Peter Buxtun, whistleblower who exposed Tuskegee syphilis study, dies aged 86; Associated Press via The Guardian, July 15, 2024

Associated Press via The Guardian; Peter Buxtun, whistleblower who exposed Tuskegee syphilis study, dies aged 86

"Peter Buxtun, the whistleblower who revealed that the US government allowed hundreds of Black men in rural Alabama to go untreated for syphilis in what became known as the Tuskegee study, has died. He was 86...

Buxtun is revered as a hero to public health scholars and ethicists for his role in bringing to light the most notorious medical research scandal in US history. Documents that Buxtun provided to the Associated Press, and its subsequent investigation and reporting, led to a public outcry that ended the study in 1972.

Forty years earlier, in 1932, federal scientists began studying 400 Black men in Tuskegee, Alabama, who were infected with syphilis. When antibiotics became available in the 1940s that could treat the disease, federal health officials ordered that the drugs be withheld. The study became an observation of how the disease ravaged the body over time...

In his complaints to federal health officials, he drew comparisons between the Tuskegee study and medical experiments Nazi doctors had conducted on Jews and other prisoners. Federal scientists did not believe they were guilty of the same kind of moral and ethical sins, but after the Tuskegee study was exposed, the government put in place new rules about how it conducts medical research. Today, the study is often blamed for the unwillingness of some African Americans to participate in medical research.

“Peter’s life experiences led him to immediately identify the study as morally indefensible and to seek justice in the form of treatment for the men. Ultimately, he could not relent,” said the CDC’s Pestorius."

Monday, July 15, 2024

National Research Act at 50: An Ethics Landmark in Need of an Update; The Hastings Center, July 12, 2024

Mark A. Rothstein and Leslie E. Wolf, The Hastings Center; National Research Act at 50: An Ethics Landmark in Need of an Update

"On July 12, 1974, President Richard M. Nixon signed into law the National Research Act, one of his last major official actions before resigning on August 8. He was preoccupied by Watergate at the time, and there has been speculation about whether he would have done this under less stressful circumstances. But enactment of the NRA was a foregone conclusion. After a series of legislative compromises, the Joint Senate-House Conference Report was approved by bipartisan, veto-proof margins in the Senate (72-14) and House (311-10).

The NRA was a direct response to the infamous Untreated Syphilis Study at Tuskegee, whose existence and egregious practices, disclosed by whistleblower Peter Buxtun, were originally reported by Associated Press journalist Jean Heller in the Washington Star on July 25, 1972. After congressional hearings exposing multiple research abuses, including the Tuskegee syphilis study, and legislative proposals in 1973, support coalesced around legislation with three main elements: (1) directing preparation of guidance documents on broad research ethics principles and various controversial issues by multidisciplinary experts appointed to a new federal commission, (2) adopting a model of institutional review boards, and (3) establishing federal research regulations applicable to researchers receiving federal funding.

This essay reflects on the NRA at 50. It traces the system of research ethics guidance, review, and regulation the NRA established; assesses how well that model has functioned; and describes some key challenges for the present and future. We discuss some important substantive and procedural gaps in the NRA regulatory structure that must be addressed to respond to the ethical issues raised by modern research." 

Holy See welcomes ‘significant’ new treaty on intellectual property; Vatican News, July 10, 2024

Joseph Tulloch, Vatican News; Holy See welcomes ‘significant’ new treaty on intellectual property

"Archbishop Ettore Balestrero, the Permanent Observer of the Holy See to the United Nations and Other International Organizations in Geneva, has welcomed a historic new treaty on intellectual property.

In an address to member states of the UN's World Intellectual Property Organisation (WIPO), the Archbishop called the treaty a “significant step forward”.

The treaty

WIPO member states adopted the agreement – which regards “Intellectual Property, Genetic Resources and Associated Traditional Knowledge” – in May of this year.

The treaty establishes a new disclosure requirement in international law for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge.

It was the first WIPO treaty in over a decade, as well as the first to ever deal with the genetic resources and traditional knowledge of indigenous peoples."

One-third of US military could be robotic by 2039: Milley; Military Times, July 14, 2024

Military Times; One-third of US military could be robotic by 2039: Milley

"The 20th chairman of the Joint Chiefs of Staff believes growing artificial intelligence and unmanned technology could lead to robotic military forces in the future.

“Ten to fifteen years from now, my guess is a third, maybe 25% to a third of the U.S. military will be robotic,” said retired Army Gen. Mark Milley at an Axios event Thursday launching the publication’s Future of Defense newsletter.

He noted these robots could be commanded and controlled by AI systems."

AI Ethics for Peace; L'Osservatore Romano, July 12, 2024

L'Osservatore Romano; AI Ethics for Peace

"Eleven World Religions, sixteen new signatories, thirteen nations in attendance, more than 150 participants: these are some of the numbers of AI Ethics for Peace, the historic multireligious event held in Hiroshima, Japan, on 9 and 10 July...

The choice to hold this event in Hiroshima has a deeply symbolic meaning, because no other city like it bears witness to the consequences of destructive technology and the need for a lasting quest for peace.

AI Ethics for Peace, over two days, brought together the world’s major religions to underscore their crucial importance in shaping a society in which, in the face of the relentless acceleration of technology, the call for technological development that protects the dignity of each individual human being and the entire planet becomes a reality.

This will be possible only if algorethics, that is, the development and application of an ethics of artificial intelligence, becomes an indispensable element by design, i.e. from the moment of its design.

Remarkable was the talk by Father Paolo Benanti, Professor of Ethics of Technology at the Pontifical Gregorian University, who presented the Hiroshima Addendum on Generative AI. This document focuses on the need for ethical governance of generative AI — an ongoing and iterative process that requires a sustained commitment from all stakeholders so that its potential is used for the good of humanity.

The application of Rome Call principles to the reality of the tech world and the responsibility that AI producers share were witnessed by the attending big tech leaders."

Pope asks world's religions to push for ethical AI development; United States Conference of Catholic Bishops, July 10, 2024

Justin McLellan, United States Conference of Catholic Bishops; Pope asks world's religions to push for ethical AI development

"Pope Francis called on representatives from the world's religions to unite behind the defense of human dignity in an age that will be defined by artificial intelligence.

"I ask you to show the world that we are united in asking for a proactive commitment to protect human dignity in this new era of machines," the pope wrote in a message to participants of a conference on AI ethics which hosted representatives from 11 world religions.

Religious leaders representing Eastern faiths such as Buddhism, Hinduism, Zoroastrianism, and Bahá'í, among others, as well as leaders of the three Abrahamic religions gathered in Hiroshima, Japan, for the conference, titled "AI Ethics for Peace." They also signed the Rome Call for AI Ethics -- a document developed by the Pontifical Academy for Life which asks signatories to promote an ethical approach to AI development."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

After abuse revelations, professors grapple with how to teach Munro; The Washington Post, July 12, 2024

The Washington Post; After abuse revelations, professors grapple with how to teach Munro

"Professors are wrestling with how to teach Munro’s work. Bookstores are debating whether to feature it on their shelves. And Canadians are grappling with the age-old question: Is it possible to divorce the art from the artist?"

Class explores how media impacts perceptions of health issues; University of Pittsburgh, University Times, July 11, 2024

MARTY LEVINE, University of Pittsburgh, University Times; Class explores how media impacts perceptions of health issues

"Communicating a message through storytelling, and not the mere recitation of facts, is key to public health communication, and Hoffman collaborates often with the Norman Lear Center at the University of Southern California, whose “Hollywood, Health and Society” project has conducted research on everything from “Increases in calls to the CDC National STD and AIDS hotline following AIDS-related episodes in a soap opera” to “The Impact of Food and Nutrition Messages on The Daily Show with Jon Stewart.” It also provides consultants to shows from “Breaking Bad” to “Black-ish,” and a Lear Center rep spoke in Hoffman’s class.

Hoffman was recently lead author on a published overview of current research evidence on the media and health, which found that “health storylines on fictional television influence viewers.”...

Pitt Public Health was the leader in developing the Salk vaccine for polio, she points out. Public health education and media literacy can be a sort of vaccination against misinformation, she says: “We often talk about it as inoculation. Misinformation is not going away. How can we make people less susceptible to it?”"

Thursday, July 11, 2024

The assignment: Build AI tools for journalists – and make ethics job one; Poynter, July 8, 2024

Poynter; The assignment: Build AI tools for journalists – and make ethics job one

"Imagine you had virtually unlimited money, time and resources to develop an AI technology that would be useful to journalists.

What would you dream, pitch and design?

And how would you make sure your idea was journalistically ethical?

That was the scenario posed to about 50 AI thinkers and journalists at Poynter’s recent invitation-only Summit on AI, Ethics & Journalism.

The summit drew together news editors, futurists and product leaders June 11-12 in St. Petersburg, Florida. As part of the event, Poynter partnered with Hacks/Hackers to ask groups of attendees to brainstorm ethically considered AI tools that they would create for journalists if they had practically unlimited time and resources.

Event organizer Kelly McBride, senior vice president and chair of the Craig Newmark Center for Ethics and Leadership at Poynter, said the hackathon was born out of Poynter’s desire to help journalists flex their intellectual muscles as they consider AI’s ethical implications.

“We wanted to encourage journalists to start thinking of ways to deploy AI in their work that would both honor our ethical traditions and address the concerns of news consumers,” she said.

Alex Mahadevan, director of Poynter’s digital media literacy project MediaWise, covers the use of generative AI models in journalism and their potential to spread misinformation."

Religious education group sues Fort Wayne man over copyright claims; The Journal Gazette, July 8, 2024

The Journal Gazette; Religious education group sues Fort Wayne man over copyright claims

"LifeWise claims in its lawsuit that Parrish signed up online to volunteer with the hope of publishing information that might damage the organization’s reputation and prompt parents to oppose LifeWise Academy chapters in their communities.

Parrish accessed LifeWise’s information storage systems, downloaded internal documents and posted them along with the LifeWise curriculum on his website, parentsagainstlifewise.online, according to the lawsuit. It said Parrish also posted links to the curriculum on the Facebook group.

“He improperly obtained our entire copyright protected curriculum, and he posted to his website without our permission,” LifeWise said in a statement Monday.

LifeWise tried to get Parrish to voluntarily remove its curriculum, but the complaint said the organization’s efforts – including an attorney’s cease-and-desist letter and social media messages the chief operating officer sent him – were unsuccessful.

The lawsuit said Parrish responded to the letter with a meme stating, “It’s called fair use (expletive).”

LifeWise disagrees. In its statement, the organization said its curriculum is licensed through a publisher called LifeWay, and anyone is welcome to purchase the LifeWay curriculum through its website.

“Posting the entire curriculum is not ‘fair use,’ and we are confident that the judge will agree,” LifeWise said Monday."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

Justin Hendrix, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Tuesday, July 9, 2024

Record labels sue AI music startups for copyright infringement; WBUR Here & Now, July 8, 2024

WBUR Here & Now; Record labels sue AI music startups for copyright infringement

"Major record labels including Sony, Universal Music Group and Warner are suing two music startups that use artificial intelligence. The labels say Suno and Udio rely on mass copyright infringement, echoing similar complaints from authors, publishers and artists who argue that generative AI infringes on copyright.

Here & Now's Lisa Mullins discusses the cases with Ina Fried, chief technology correspondent for Axios."

Bridging the Digital Divide: Advancing Access to Broadband for All; American Bar Association (ABA), June 3, 2024

Emily Bergeron, American Bar Association (ABA); Bridging the Digital Divide: Advancing Access to Broadband for All

"The “digital divide” is the disparity in access to and utilization of information and communication technologies between different groups based on socioeconomic status, geographic location, age, education, or other demographic characteristics. This divide often manifests as unequal access to the internet and digital devices, leading to disparities in opportunities, information, health care, education, and participation in government and the digital- and knowledge-based economy. The COVID-19 pandemic brought considerable focus to the digital divide. Individuals with broadband access could work, attend school, shop, and consult with their doctors from the comfort of their homes, while those lacking access had few options...

Eight out of 10 white adults have a broadband connection at home, whereas smaller percentages of Black and Hispanic adults—precisely 71 percent and 65 percent—indicate the same. Notably, Black adults are more likely than white adults to believe that a lack of high-speed internet at home puts people at a significant disadvantage when connecting with medical professionals, with 63 percent of Black adults expressing this view compared to 49 percent of white adults. The perspective of Hispanic adults, at 53 percent, does not significantly differ from that of individuals from other racial and ethnic backgrounds.

Despite federal efforts to expand broadband access in Tribal lands, a significant disparity persists. Approximately 18 percent of people in these areas still lack broadband services, while this figure is only 4 percent for non-Tribal areas. The gap widens further in rural regions, where about 30 percent of individuals on Tribal lands lack broadband access compared to 14 percent in non-Tribal areas...

The digital divide is not just a matter of technology. It undermines social justice and equality. By working collectively to bridge this divide, we can help create a more inclusive, connected, and equitable society where everyone can harness the benefits of the digital age. It is incumbent on governments, policymakers, and private organizations to take proactive measures and commit to digital inclusion, ensuring that no one is left behind in this fast-evolving digital landscape."

Monday, July 8, 2024

10 Things Every Board Member Needs to Know; American Libraries, July 1, 2024

Sanhita SinhaRoy, American Libraries; 10 Things Every Board Member Needs to Know

Kip Currier: Preparing Board members for effective, ethical service is vital for all organizations. Surprisingly, the word "ethics" is never specifically mentioned in this article, though ethics is implicated with the words "abiding by the duties of care, loyalty, and honesty" at the very end. Board members need to be aware of ethics principles/codes of organizations where they serve, as well as legal requirements and fiduciary responsibilities that have ethical dimensions in states where their organizations are located. 

[Excerpt]

"As libraries and library workers face censorship attempts, campus protests, and budget cuts, among other challenges, Harrington—a consultant and current president of the Timberland Regional (Wash.) Library board of trustees—led the program “Top 10 Things Every Library Board Member Should Know—but Often Doesn’t.”...

#10 There are specific attributes of an effective nonprofit board member.

They include a commitment to the mission of the organization; understanding of the board’s governance roles; active involvement in board activities and committees; thinking and acting strategically; not being involved in day-to-day management of the organization; abiding by the duties of care, loyalty, and honesty; and supporting the organization financially and through advocacy."

Five Questions to Ask Before Implementing Generative AI; Markkula Center for Applied Ethics at Santa Clara University, July 3, 2024

Ann Skeet, Markkula Center for Applied Ethics at Santa Clara University; Five Questions to Ask Before Implementing Generative AI

"While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system that has an identifiable training data set.”"

Sunday, July 7, 2024

Jim Clyburn Is Right About What Democrats Should Do Next; The New York Times, July 7, 2024

Ezra Klein, The New York Times; Jim Clyburn Is Right About What Democrats Should Do Next

Kip Currier: The most important sentence in this Ezra Klein OpEd is this one: 

"What Democrats denied themselves over the past few years was information."

Democracies, and political parties, depend on informed citizenries. Informed citizenries are cultivated and advanced when people have access to accurate, trustworthy information. Without informed citizenries, democracies and political parties are like endangered species that can weaken and disappear.

Access to information is the core principle that information centers -- libraries, archives, museums -- make possible. As New York Public Library Director Anthony Marx has previously underscored, "libraries are in the information access business." 

Information centers serve essential roles for healthy, functioning democracies, political parties, and societies.

Supreme Court ethics remain at center stage after hard-right rulings; The Washington Post; July 6, 2024

The Washington Post; Supreme Court ethics remain at center stage after hard-right rulings

"Several experts said the court needs to fully embrace an ethics overhaul to help reassure the public."

Saturday, July 6, 2024

New York’s First Black Librarians Changed the Way We Read; The New York Times, June 19, 2024

Jennifer Schuessler, The New York Times; New York’s First Black Librarians Changed the Way We Read

"Today, figures like Schomburg and the historian and activist W.E.B. Du Bois (another collector and compiler of Black books) are hailed as the founders of the 20th-century Black intellectual tradition. But increasingly, scholars are also uncovering the important role of the women who often ran the libraries, where they built collections and — just as important — communities of readers.

“Mr. Schomburg’s collection is really the seed,” said Joy Bivins, the current director of the Schomburg Center for Research in Black Culture, as the 135th Street library, currently home to more than 11 million items, is now known. “But in many ways, it is these women who were the institution builders.”

Many were among the first Black women to attend library school, where they learned the tools and the systems of the rapidly professionalizing field. On the job, they learned these tools weren’t always suited to Black books and ideas, so they invented their own.

At times, they battled overt and covert censorship that would be familiar in today’s climate of rising book bans and restrictions on teaching so-called divisive concepts. But whether they worked in world-famous research collections or modest public branch libraries, these pioneers saw their role as not just about tending old books but also about making room for new people and new ideas."

The Great Scrape: The Clash Between Scraping and Privacy; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School, and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society; SSRN; The Great Scrape: The Clash Between Scraping and Privacy

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around

these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."
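For readers who want the mechanics behind the abstract, this is roughly what "scraping" means at its smallest scale. The sketch below is a generic illustration, not the authors' code; the URL is a placeholder, and production systems run logic like this across millions of pages, which is exactly the scale the Article is concerned with.

```python
# Minimal scraping sketch: automated extraction of publicly available
# page text (placeholder URL; real scrapers crawl at massive scale).
import requests
from bs4 import BeautifulSoup

def scrape_page(url: str) -> list[str]:
    """Fetch one page and return its visible paragraph text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [p.get_text(strip=True) for p in soup.find_all("p")]

if __name__ == "__main__":
    for paragraph in scrape_page("https://example.com"):
        print(paragraph)
```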

Friday, July 5, 2024

A.I. ‘Friend’ for Public School Students Falls Flat; The New York Times, July 1, 2024

 Dana Goldstein, The New York Times; A.I. ‘Friend’ for Public School Students Falls Flat

"A.I. companies are heavily marketing themselves to schools, which spend tens of billions of dollars annually on technology. But AllHere’s sudden breakdown illustrates some of the risks of investing taxpayer dollars in artificial intelligence, a technology with enormous potential but little track record, especially when it comes to children. There are many complicated issues at play, including privacy of student data and the accuracy of any information offered via chatbots. And A.I. may also run counter to another growing interest for education leaders and parents — reducing children’s screen time."

Thursday, July 4, 2024

The AI Ethicist: Fact or Fiction?; SSRN, The Wharton School of the University of Pennsylvania, November 20, 2023

Christian Terwiesch (University of Pennsylvania, Operations & Information Management Department), Lennart Meincke (University of Pennsylvania, The Wharton School), and Gideon Nave (University of Pennsylvania, The Wharton School); SSRN; The AI Ethicist: Fact or Fiction?

"Abstract

This study investigates the efficacy of an AI-based ethical advisor using the GPT-4 model. Drawing from a pool of ethical dilemmas published in the New York Times column “The Ethicist”, we compared the ethical advice given by the human expert and author of the column, Dr. Kwame Anthony Appiah, with AI-generated advice. The comparison is done by evaluating the perceived usefulness of the ethical advice across three distinct groups: random subjects recruited from an online platform, Wharton MBA students, and a panel of ethical decision-making experts comprising academics and clergy. Our findings revealed no significant difference in the perceived value of the advice between human-generated ethical advice and AI-generated ethical advice. When forced to choose between the two sources of advice, the random subjects recruited online displayed a slight but significant preference for the AI-generated advice, selecting it 60% of the time, while MBA students and the expert panel showed no significant preference."
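A quick back-of-the-envelope check helps put the 60% figure in context. Assuming the roughly 100 online evaluators each contributed a single independent choice (a simplification of the study's design, in which evaluators judged multiple dilemmas, presumably the source of the reported significance), a two-sided binomial test shows that 60 of 100 sits right at the edge of conventional significance:

```python
# Binomial check on a 60% preference rate (the n here is an assumption).
from scipy.stats import binomtest

n_choices = 100      # assumed independent picks, one per online evaluator
ai_preferred = 60    # picks favoring the AI-generated advice

result = binomtest(ai_preferred, n_choices, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.3f}")  # ~0.057: borderline at this n;
# pooling many dilemmas per evaluator, as the study did, sharpens the result.
```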

AI Chatbots Seem as Ethical as a New York Times Advice Columnist; Scientific American, July 1, 2024

Scientific American; AI Chatbots Seem as Ethical as a New York Times Advice Columnist

"In 1691 the London newspaper the Athenian Mercury published what may have been the world’s first advice column. This kicked off a thriving genre that has produced such variations as Ask Ann Landers, which entertained readers across North America for half a century, and philosopher Kwame Anthony Appiah’s weekly The Ethicist column in the New York Times magazine. But human advice-givers now have competition: artificial intelligence—particularly in the form of large language models (LLMs), such as OpenAI’s ChatGPT—may be poised to give human-level moral advice.

LLMs have “a superhuman ability to evaluate moral situations because a human can only be trained on so many books and so many social experiences—and an LLM basically knows the Internet,” says Thilo Hagendorff, a computer scientist at the University of Stuttgart in Germany. “The moral reasoning of LLMs is way better than the moral reasoning of an average human.” Artificial intelligence chatbots lack key features of human ethicists, including self-consciousness, emotion and intention. But Hagendorff says those shortcomings haven’t stopped LLMs (which ingest enormous volumes of text, including descriptions of moral quandaries) from generating reasonable answers to ethical problems.

In fact, two recent studies conclude that the advice given by state-of-the-art LLMs is at least as good as what Appiah provides in the pages of the New York Times. One found “no significant difference” between the perceived value of advice given by OpenAI’s GPT-4 and that given by Appiah, as judged by university students, ethical experts and a set of 100 evaluators recruited online. The results were released as a working paper last fall by a research team including Christian Terwiesch, chair of the Operations, Information and Decisions department at the Wharton School of the University of Pennsylvania."