Monday, July 29, 2024

The COPIED Act Is an End Run around Copyright Law; Public Knowledge, July 24, 2024

Lisa Macpherson, Public Knowledge; The COPIED Act Is an End Run around Copyright Law

"Over the past week, there has been a flurry of activity related to the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act. While superficially focused on helping people understand when they are looking at content that has been created or altered using artificial intelligence (AI) tools, this overly broad bill makes an end run around copyright law and restricts how everyone – not just huge AI developers – can use copyrighted work as the basis of new creative expression. 

The COPIED Act was introduced in the Senate two weeks ago by Senators Maria Cantwell (D-WA), chair of the Commerce Committee; Marsha Blackburn (R-TN); and Martin Heinrich (D-NM). By the end of last week, we learned there may be a hearing and markup on the bill within days or weeks. The bill directs agency action on standards for detecting and labeling synthetic content; requires AI developers to allow the inclusion of these standards on content; and prohibits the use of such content to generate new content or train AI models without consent and compensation from creators. It allows for enforcement by the Federal Trade Commission and state attorneys general, and for private rights of action. 

We want to say unequivocally that this is the wrong bill, at the wrong time, from the wrong policymakers, to address complex questions of copyright and generative artificial intelligence."

Lawyers using AI must heed ethics rules, ABA says in first formal guidance; Reuters, July 29, 2024

Reuters; Lawyers using AI must heed ethics rules, ABA says in first formal guidance

"Lawyers must guard against ethical lapses if they use generative artificial intelligence in their work, the American Bar Association said on Monday.

In its first formal ethics opinion on generative AI, an ABA committee said lawyers using the technology must "fully consider" their ethical obligations to protect clients, including duties related to lawyer competence, confidentiality of client data, communication and fees...

Monday's opinion from the ABA's ethics and professional responsibility committee said AI tools can help lawyers increase efficiency but can also carry risks such as generating inaccurate output. Lawyers also must try to prevent inadvertent disclosure or access to client information, and should consider whether they need to tell a client about their use of generative AI technologies, it said."

Joe Biden: My plan to reform the Supreme Court and ensure no president is above the law; The Washington Post, July 29, 2024

Joe Biden, The Washington Post; Joe Biden: My plan to reform the Supreme Court and ensure no president is above the law

"That’s why — in the face of increasing threats to America’s democratic institutions — I am calling for three bold reforms to restore trust and accountability to the court and our democracy.

First, I am calling for a constitutional amendment called the No One Is Above the Law Amendment. It would make clear that there is no immunity for crimes a former president committed while in office. I share our Founders’ belief that the president’s power is limited, not absolute. We are a nation of laws — not of kings or dictators.

Second, we have had term limits for presidents for nearly 75 years. We should have the same for Supreme Court justices. The United States is the only major constitutional democracy that gives lifetime seats to its high court. Term limits would help ensure that the court’s membership changes with some regularity. That would make timing for court nominations more predictable and less arbitrary. It would reduce the chance that any single presidency radically alters the makeup of the court for generations to come. I support a system in which the president would appoint a justice every two years to spend 18 years in active service on the Supreme Court.

Third, I’m calling for a binding code of conduct for the Supreme Court. This is common sense. The court’s current voluntary ethics code is weak and self-enforced. Justices should be required to disclose gifts, refrain from public political activity and recuse themselves from cases in which they or their spouses have financial or other conflicts of interest. Every other federal judge is bound by an enforceable code of conduct, and there is no reason for the Supreme Court to be exempt.

All three of these reforms are supported by a majority of Americans — as well as conservative and liberal constitutional scholars. And I want to thank the bipartisan Presidential Commission on the Supreme Court of the United States for its insightful analysis, which informed some of these proposals.

We can and must prevent the abuse of presidential power. We can and must restore the public’s faith in the Supreme Court. We can and must strengthen the guardrails of democracy.

In America, no one is above the law. In America, the people rule."

Sunday, July 28, 2024

A.I. May Save Us, or May Construct Viruses to Kill Us; The New York Times, July 27, 2024

Nicholas Kristof, The New York Times; A.I. May Save Us, or May Construct Viruses to Kill Us

"Managing A.I. without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire."

Friday, July 26, 2024

In Hiroshima, a call for peaceful, ethical AI; Cisco, The Newsroom, July 18, 2024

Kevin Delaney, Cisco, The Newsroom; In Hiroshima, a call for peaceful, ethical AI

"“Artificial intelligence is a great tool with unlimited possibilities of application,” Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, said in an opening address at the AI Ethics for Peace conference in Hiroshima this month.

But Paglia was quick to add that AI’s great promise is fraught with potential dangers.

“AI can and must be guided so that its potential serves the good since the moment of its design,” he stressed. “This is our common responsibility.”

The two-day conference aimed to further the Rome Call for AI Ethics, a document first signed on February 28, 2020, at the Vatican. It promoted an ethical approach to artificial intelligence through shared responsibility among international organizations, governments, institutions and technology companies.

This month’s Hiroshima conference drew dozens of global religious, government, and technology leaders to a city that has transcended its dark past of tech-driven, atomic destruction to become a center for peace and cooperation.

The overarching goal in Hiroshima? To ensure that, unlike atomic energy, artificial intelligence is used only for peace and positive human advancement. And as an industry leader in AI innovation and its responsible use, Cisco was amply represented by Dave West, Cisco’s president for Asia Pacific, Japan, and Greater China (APJC)."

Students Weigh Ethics of Using AI for College Applications; Education Week via GovTech, July 24, 2024

Alyson Klein, Education Week via GovTech; Students Weigh Ethics of Using AI for College Applications

"About a third of high school seniors who applied to college in the 2023-24 school year acknowledged using an AI tool for help in writing admissions essays, according to research released this month by foundry10, an organization focused on improving learning.

About half of those students — or roughly one in six students overall — used AI the way Makena did, to brainstorm essay topics or polish their spelling and grammar. And about 6 percent of students overall—including some of Makena's classmates, she said — relied on AI to write the final drafts of their essays instead of doing most of the writing themselves.

Meanwhile, nearly a quarter of students admitted to Harvard University's class of 2027 paid a private admissions consultant for help with their applications.

The use of outside help, in other words, is rampant in college admissions, opening up a host of questions about ethics, norms, and equal opportunity.

Top among them: Which — if any — of these students cheated in the admissions process?

For now, the answer is murky."

Thursday, July 25, 2024

Philip Glass Says Crimean Theater Is Using His Music Without Permission; The Daily Beast, July 25, 2024

Clay Walker, The Daily Beast; Philip Glass Says Crimean Theater Is Using His Music Without Permission

"Legendary American composer Philip Glass had some harsh words after learning that a theater in Russian-annexed Crimea plans to use his music and name as part of a new show. In a letter posted to X, Glass explained that he had learned a new ballet called Wuthering Heights is set to open at the Sevastopol Opera and Ballet Theater—using works he had penned without his consent. “No permission for the use of my music in the ballet or the use of my name in the advertising and promotion of the ballet was ever requested of me or given by me. The use of my music and the use of my name without my consent is in violation of the Berne Convention for the Protection of Literary and Artistic works to which the Russian Federation is a signatory. It is an act of piracy,” Glass wrote."

Dujardin’s career in tatters after horse whipping costs her damehood and funding; The Guardian, July 24, 2024

The Guardian; Dujardin’s career in tatters after horse whipping costs her damehood and funding

"The video of the Team GB ­equestrian star Charlotte Dujardin ­whipping a horse 24 times in a private ­coaching session has cost her a damehood, ­official sources have told the Guardian.

Dujardin was widely expected to be handed the honour if she won another dressage medal in Paris. That would give the 39-year-old seven ­medals, moving one ahead of Laura Kenny to become Britain’s most decorated female Olympian in her own right. However, Whitehall sources have confirmed that any such honour is off the table.

Dujardin now finds her career in tatters after being kicked out of the Olympics and suspended for six months. To compound her ­problems, UK Sport has also suspended her lottery funding after the video of her ­hitting the horse became public.

In a statement, UK Sport said it was “disturbed by the serious concerns that have been raised in the past 24 hours regarding horse welfare and Charlotte Dujardin. We expect all staff and athletes in Olympic and ­Paralympic sport to adhere to the highest standards of behaviour, ­ethics and integrity.”

The Fundamental Jewish Principles Guiding the Ethical Use of AI; Chabad.org, July 24, 2024

Jonathan Gabay, Chabad.org; The Fundamental Jewish Principles Guiding the Ethical Use of AI

"Jonathan Gabay is a respected figure in the world of communications, public relations, and branding. Blending informed analysis of the tactics of political influence both ancient and modern, he confronts the weaponization of AI and deep-fakes that don’t just disrupt but threaten democracy itself. Jonathan is known for his depth of understanding of human psychology and for applying this knowledge to create powerful political brands and campaigns."

A new tool for copyright holders can show if their work is in AI training data; MIT Technology Review, July 25, 2024

MIT Technology Review; A new tool for copyright holders can show if their work is in AI training data

"Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. 

Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history—strategies like including fake locations on a map or fake words in a dictionary. 

These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these.  

The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves." 
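
The detection logic behind these traps is simple to sketch. The following is a minimal illustration of the general idea rather than the team's actual GitHub code: fabricate a high-perplexity nonsense sequence to hide in a document, then later compare a language model's perplexity on that trap against fresh control sequences, on the reasoning that a trap memorized during training will score conspicuously low. The model choice, trap format, and decision threshold here are all illustrative assumptions.

import math
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in for the model suspected of having trained on the marked text.
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def make_trap(seed: int, length: int = 12) -> str:
    """Build a reproducible nonsense sentence to hide in a document."""
    rng = random.Random(seed)
    words = ["quartz", "velvet", "argon", "plinth", "sonata", "glacier",
             "umber", "falcon", "zephyr", "cobalt", "mantis", "prism"]
    return " ".join(rng.choice(words) for _ in range(length))

def perplexity(text: str) -> float:
    """Per-token perplexity of the text under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

trap = make_trap(seed=42)                                # hidden in the protected work
controls = [make_trap(seed=s) for s in range(100, 110)]  # never published anywhere

ppl_trap = perplexity(trap)
ppl_controls = sum(perplexity(c) for c in controls) / len(controls)
print(f"trap: {ppl_trap:.1f}  controls: {ppl_controls:.1f}")

# Illustrative threshold, not a calibrated statistical test.
if ppl_trap < 0.8 * ppl_controls:
    print("Trap looks memorized: the text may be in the training data.")

A single short trap like this is unlikely to be memorized in practice; designs of this kind typically repeat the hidden sequence many times within a document, and a real test would use a proper membership-inference statistic rather than a fixed threshold.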

Elena Kagan Endorses High Court Ethics Enforcement Mechanism; Bloomberg Law, July 25, 2024

Suzanne Monyak and Lydia Wheeler, Bloomberg Law; Elena Kagan Endorses High Court Ethics Enforcement Mechanism

"Justice Elena Kagan proposed Chief Justice John Roberts appoint a panel of judges to enforce the US Supreme Court’s code of conduct.

While speaking Thursday at a judicial conference in Sacramento, California, Kagan said she trusts Roberts and if he creates “some sort of committee of highly respected judges with a great deal of experience and a reputation for fairness,” that seems like a good solution...

Kagan, in response to a moderator’s question at the US Court of Appeals for the Ninth Circuit’s annual judicial conference, acknowledged there are difficulties in deciding who should enforce an ethics code for the justices.

“But I feel as though we, however hard it is, that we could and should try to figure out some mechanism for doing this,” she said."

Who will control the future of AI?; The Washington Post, July 25, 2024

The Washington Post; Who will control the future of AI?

"Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?"

Tuesday, July 23, 2024

The Data That Powers A.I. Is Disappearing Fast; The New York Times, July 19, 2024

Kevin Roose, The New York Times; The Data That Powers A.I. Is Disappearing Fast

"For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt."
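
Those robots.txt restrictions are nothing more exotic than plain-text directives that crawlers are asked, but not technically forced, to honor. Below is a minimal sketch using only Python's standard library; GPTBot (OpenAI) and CCBot (Common Crawl) are real published crawler user-agents, while the site-wide block shown is simply an example of the kind of opt-out the researchers counted.

import urllib.robotparser

# An illustrative robots.txt: block two AI training crawlers site-wide,
# allow everyone else. Each site sets its own policy.
ROBOTS_TXT = """
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks permission before fetching a page.
for agent in ["GPTBot", "CCBot", "SomeSearchEngine"]:
    verdict = "allowed" if parser.can_fetch(agent, "https://example.com/articles/") else "blocked"
    print(f"{agent}: {verdict}")

Because the protocol is voluntary, such entries signal a withdrawal of consent rather than a hard technical barrier, which is part of why the study describes the trend as a crisis in consent.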

Monday, July 22, 2024

Landlords Used Software to Set Rents. Then Came the Lawsuits.; The New York Times, July 19, 2024

Danielle Kaye, The New York Times; Landlords Used Software to Set Rents. Then Came the Lawsuits.

"The use of the RealPage software in setting rents was the subject of a ProPublica investigation in 2022. Antitrust experts say the allegations in the lawsuits, if substantiated, paint a clear-cut picture of violations of federal antitrust law, which prohibits agreements among competitors to fix prices.

“There’s an emerging view that these exchanges of confidential business information raise significant competitive concerns,” said Peter Carstensen, an emeritus professor at the University of Wisconsin focused on antitrust law and competition policy. The use of algorithmic software, he added, “speeds up the coordination and makes it possible to coordinate many more players with really good information.”"

What Is The Future Of Intellectual Property In A Generative AI World?; Forbes, July 18, 2024

Ron Schmelzer, Forbes; What Is The Future Of Intellectual Property In A Generative AI World?

"Taking a More Sophisticated and Nuanced Approach to GenAI IP Issues

Clearly we’re at a crossroads when it comes to intellectual property and the answers aren’t cut and dry. Simply preventing IP protection of AI-generated works might not be possible if AI systems are used in any significant portion of the creation process. Likewise, prohibiting AI systems from making use of pre-existing IP-protected works might be a Pandora’s box we can’t close. We need to find new approaches that balance the ability to use AI tools as part of the creation process with IP protection of both existing works and the outputs of GenAI systems.

This means a more sophisticated and nuanced approach to clarifying the legal status of data used in AI training and developing mechanisms to ensure that AI-generated outputs respect existing IP rights, while still providing protection for creative outputs that have involved significant elements of human creativity in curation and prompting, even if the outputs are transformative recombinations of training data. Clearly we’re in the early days of the continued evolution of what intellectual property means."

This might be the most important job in AI; Business Insider, July 21, 2024

Business Insider; This might be the most important job in AI

"Generative AI can hallucinate, spread misinformation, and reinforce biases against marginalized groups if it's not managed properly. Given that the technology relies on volumes of sensitive data, the potential for data breaches is also high. At worst, though, there's the danger that the more sophisticated it becomes, the less likely it is to align with human values.

With great power, then, comes great responsibility, and companies that make money from generative AI must also ensure they regulate it.

That's where a chief ethics officer comes in...

Those who are successful in the role ideally have four areas of expertise, according to Mills. They should have a technical grasp over generative AI, experience building and deploying products, an understanding of the major laws and regulations around AI, and significant experience hiring and making decisions at an organization."

The Fast-Moving Race Between Gen-AI and Copyright Law; Baker Donelson, July 10, 2024

Scott M. Douglass and Dominic Rota, Baker Donelson; The Fast-Moving Race Between Gen-AI and Copyright Law

"It is still an open question whether plaintiffs will succeed in showing that use of copyrighted works to train generative AI constitutes copyright infringement and be able to overcome the fair use defense or succeed in showing that generative AI developers are removing CMI in violation of the DMCA.

The government has made some moves in the past few months to resolve these issues. The U.S. Copyright Office started an inquiry in August 2023, seeking public comments on copyright law and policy issues raised by AI systems, and Rep. Adam Schiff (D-Calif.) introduced a new bill in April 2024, that would require people creating a training dataset for a generative AI system to submit to the Register of Copyrights a detailed summary of any copyrighted works used in training. These initiatives will most likely take some time, meaning that currently pending litigation is vitally important for defining copyright law as it applies to generative AI.

Recent licensing deals with news publishers appear to be anywhere from $1 million to $60 million per year, meaning that AI companies will have to pay an enormous amount to license all the copyrighted works necessary to train their generative AI models effectively. However, as potential damages in a copyright infringement case could be billions of dollars, as claimed by Getty Images and other plaintiffs, developers of generative AI programs should seriously consider licensing any copyrighted works used as training data."

Friday, July 19, 2024

The Media Industry’s Race To License Content For AI; Forbes, July 18, 2024

Bill Rosenblatt, Forbes; The Media Industry’s Race To License Content For AI

"AI content licensing initiatives abound. More and more media companies have reached license agreements with AI companies individually. Several startups have formed to aggregate content into large collections for AI platforms to license in one-stop shopping arrangements known in the jargon as blanket licenses. There are now so many such startups that last month they formed a trade association—the Dataset Providers Alliance—to organize them for advocacy.

Ironically, the growing volume of all this activity could jeopardize its value for copyright owners and AI platforms alike.

It will take years before the panoply of lawsuits yields any degree of clarity in the legal rules for copyright in the AI age; we’re in the second year of what is typically a decade-long process for copyright laws to adapt to disruptive technologies. One reason for copyright owners to organize now to provide licenses for AI is that—as we’ve learned from analogous situations in the past—both courts and Congress will consider how easy it is for the AI companies to license content properly in determining whether licensing is required."

Thursday, July 18, 2024

Ethical AI: Tepper School Course Explores Responsible Business; Carnegie Mellon University Tepper School of Business, July 1, 2024

Carnegie Mellon University Tepper School of Business; Ethical AI: Tepper School Course Explores Responsible Business

"As artificial intelligence (AI) becomes more widely used, there is growing interest in the ethics of AI. A new article by Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University's Tepper School of Business, detailed a graduate course he developed titled "Ethics and AI." The course bridges AI-specific challenges with long-standing ethical discussions in business."

The Future of Ethics in AI: A Global Conversation in Hiroshima; JewishLink, July 18, 2024

Rabbi Dr. Ari Berman, JewishLink; The Future of Ethics in AI: A Global Conversation in Hiroshima

"Last week, I had the honor of representing the Jewish people at the AI Ethics for Peace Conference in Hiroshima, Japan, a three day conversation of global faith, political and industry leaders. The conference was held to promote the necessity of ethical guidelines for the future of artificial intelligence. It was quite an experience.

During the conference, I found myself sitting down for lunch with a Japanese Shinto Priest, a Zen Buddhist monk and a leader of the Muslim community from Singapore. Our conversation could not have been more interesting. The developers who devised AI can rightfully boast of many accomplishments, and they can now count among them the unintended effect of bringing together people of diverse backgrounds who are deeply concerned about the future their creations will bring.

AI promises great potential benefits, including global access to education and healthcare, medical breakthroughs, and greater predictability that will lead to efficiencies and a better quality of life, at a level unimaginable just a few years ago. But it also poses threats to the future of humanity, including deepfakes, structural biases in algorithms, a breakdown of human connectivity, and the deterioration of personal privacy."

Tech sector examines the risks and rewards of AI. How two paths converged at Northeastern’s London campus; Northeastern Global News, July 17, 2024

Northeastern Global News; Tech sector examines the risks and rewards of AI. How two paths converged at Northeastern’s London campus

"Tess Buckley boarded a plane from Canada to a country and continent she had never before stepped foot on, carrying just a suitcase and a dream.

Three years later, she is living in London and fulfilling her ambition of being an artificial intelligence ethicist in her job with techUK, the sector’s trade association, where she is charged with helping to ensure the advanced technology is used with the right intentions.

Tracy Woods was already a senior figure at tech firm Cognizant when she started asking the same type of questions about the responsible use of AI. Like Buckley, she too looked to texts and lessons of the past, some dating back thousands of years, to help her pursue the answers to some very modern questions.

Buckley and Woods may have come from opposite corners of the globe and been at different stages of their careers but they ended up at the same place: Northeastern University’s philosophy and AI graduate program in London." 

An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.; The New York Times, July 18, 2024

Adam Satariano and Roser Toll Pifarré (photographs by Ana María Arévalo Gosen), The New York Times; An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.

"Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins. At its best, the system has helped police protect vulnerable women and, overall, has reduced the number of repeat attacks in domestic violence cases. But the reliance on VioGén has also resulted in victims, whose risk levels are miscalculated, getting attacked again — sometimes leading to fatal consequences."

Wednesday, July 17, 2024

IBM reaffirms its commitment to the Rome Call for AI ethics; IBM Research, July 15, 2024

 Mike Murphy, IBM Research; IBM reaffirms its commitment to the Rome Call for AI ethics

"There have been moments throughout history where the impacts of a new technology have been world-altering. Perhaps this is why the Vatican, along with leaders from most major religions across the world, chose to host a gathering to discuss the implications for future development of AI in Hiroshima, Japan.

Last year, representatives from the Abrahamic religions came together at the Vatican to sign the Rome Call for AI Ethics, which IBM first signed with other industry and government leaders when it was launched by the Vatican in 2020. It's a document where the signatories committed to pursue an ethical approach to AI development and promote the human-centric and inclusive development of AI, rather than replacing humanity.

At Hiroshima this year, the Rome Call was signed by representatives of many of the great Eastern religions, and past signees like IBM reaffirmed their commitment."

How Creators Are Facing Hateful Comments Head-On; The New York Times, July 11, 2024

Melina Delkic, The New York Times ; How Creators Are Facing Hateful Comments Head-On

"Experts in online behavior also say that the best approach is usually to ignore nasty comments, as hard as that may be.

“I think it’s helpful for people to keep in mind that hateful comments they see are typically posted by people who are the most extreme users,” said William Brady, an assistant professor at Northwestern University, whose research team studied online outrage by looking at 13 million tweets. He added that the instinct to “punish” someone can backfire.

“Giving a toxic user any engagement (view, like, share, comment) ironically can make their content more visible,” he wrote in an email. “For example, when people retweet toxic content in order to comment on it, they are actually increasing the visibility of the content they intend to criticize. But if it is ignored, algorithms are unlikely to pick them up and artificially spread them further.”"

Tuesday, July 16, 2024

Workday Loses Bid to Toss Bias Claims Over AI Hiring Tools; Bloomberg Law, July 13, 2024

Carmen Castro-Pagán, Bloomberg Law; Workday Loses Bid to Toss Bias Claims Over AI Hiring Tools 

"Workday Inc. must defend against a lawsuit alleging its algorithmic decision-making tools discriminate against job applicants who are Black, over the age of 40, or disabled, according to a federal court opinion on Friday.

The lawsuit adequately alleges that Workday is an agent of its client-employers, and thus falls within the definition of an employer for purposes of federal anti-discrimination laws that protect based on race, age, and disability, the US District Court for the Northern District of California said."

USPTO issues AI subject matter eligibility guidance; United States Patent and Trademark Office (USPTO), July 16, 2024

United States Patent and Trademark Office (USPTO); USPTO issues AI subject matter eligibility guidance

"The U.S. Patent and Trademark Office (USPTO) has issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including in artificial intelligence (AI). This guidance update will assist USPTO personnel and stakeholders in determining subject matter eligibility under patent law (35 § U.S.C. 101) of AI inventions. This latest update builds on previous guidance by providing further clarity and consistency to how the USPTO and applicants should evaluate subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also announces three new examples of how to apply this guidance throughout a wide range of technologies. 

The guidance update, which goes into effect on July 17, 2024, provides a background on the USPTO’s efforts related to AI and subject matter eligibility, an overview of the USPTO’s patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. 

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.” 

The three new examples provide additional analyses under 35 U.S.C. § 101 of hypothetical claims in certain situations to address particular inquiries, such as whether a claim recites an abstract idea or whether a claim integrates the abstract idea into a practical application. They are intended to assist USPTO personnel in applying the USPTO’s subject matter eligibility guidance to AI inventions during patent examination, appeal, and post-grant proceedings. The examples are available on our AI-related resources webpage and our patent eligibility page on our website.

The USPTO continues to be directly involved in the development of legal and policy measures related to the impact of AI on all forms of intellectual property. The guidance update delivers on the USPTO’s obligations under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to provide guidance to examiners and the public on the impact of AI and issues at the intersection of AI and IP, including patent subject matter eligibility. This follows our announcement earlier this year on Inventorship guidance for AI-assisted inventions, as well as AI guidance for practitioners and a request for comments on the impact of AI on certain patentability considerations, including what qualifies as prior art and the assessment of the level of ordinary skill in the art (comments accepted until July 29, 2024).

The full text of the guidance update on patent subject matter eligibility is available on our Latest AI news and reports webpage and the corresponding examples are available on our AI-related resources webpage. The USPTO will accept public comments on the guidance update and the examples through September 16, 2024. Please see the Federal Register Notice for instructions on submitting comments."

Even Disinformation Experts Don’t Know How to Stop It; The New York Times, July 11, 2024

Tiffany Hsu, The New York Times; Even Disinformation Experts Don’t Know How to Stop It

"Holding the line against misinformation and disinformation is demoralizing and sometimes dangerous work, requiring an unusual degree of optimism and doggedness. Increasingly, however, even the most committed warriors are feeling overwhelmed by the onslaught of false and misleading content online."

Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems; American Bar Association (ABA), June 3, 2024

Gary Rhoades, American Bar Association (ABA); Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems

"The Civil Rights Act of 1968, also known as the Fair Housing Act (FHA), banned housing discrimination nationwide on the basis of race, religion, national origin, and color. One key finding that persuaded Dr. Martin Luther King Jr., President Lyndon Johnson, and others to fight for years for the passage of this landmark law confirmed that many Americans were being denied rental housing because of their race. Black families were especially impacted by the discriminatory rejections. They were forced to move on and spend more time and money to find housing and often had to settle for substandard housing in unsafe neighborhoods and poor school districts to avoid homelessness.

April 2024 marked the 56th year of the FHA’s attempt to end such unfair treatment. Despite the law’s broadly stated protections, its numerous state and local counterparts, and decades of enforcement, landlords’ use of high-tech algorithms for tenant screening threatens to erase the progress made. While employing algorithms to mine data such as criminal records, credit reports, and civil court records to make predictions about prospective tenants might partially remove the fallible human element, old and new biases, especially regarding race and source of income, still plague the screening results."

Corporate directors weigh AI ethics at first-of-its-kind forum; Harvard Gazette, July 11, 2024

Harvard Gazette; Corporate directors weigh AI ethics at first-of-its-kind forum

"As artificial intelligence surges, corporate directors face a set of urgent ethical considerations. What role can they play in fostering responsible practices for using AI in the workplace? Are they already using the bias-prone technology to sort through job applications?

At the inaugural Directors’ AI Ethics Forum, leaders from the business, government, and nonprofit sectors pondered these questions and more. Convening the group on the Harvard Business School campus was the Edmond & Lily Safra Center for Ethics’ Business AI Ethics research team, an initiative that promotes thoughtful approaches to the rapidly evolving technology."

Peter Buxtun, whistleblower who exposed Tuskegee syphilis study, dies aged 86; Associated Press via The Guardian, July 15, 2024

 Associated Press via The Guardian; Peter Buxtun, whistleblower who exposed Tuskegee syphilis study, dies aged 86

"Peter Buxtun, the whistleblower who revealed that the US government allowed hundreds of Black men in rural Alabama to go untreated for syphilis in what became known as the Tuskegee study, has died. He was 86...

Buxtun is revered as a hero to public health scholars and ethicists for his role in bringing to light the most notorious medical research scandal in US history. Documents that Buxtun provided to the Associated Press, and its subsequent investigation and reporting, led to a public outcry that ended the study in 1972.

Forty years earlier, in 1932, federal scientists began studying 400 Black men in Tuskegee, Alabama, who were infected with syphilis. When antibiotics became available in the 1940s that could treat the disease, federal health officials ordered that the drugs be withheld. The study became an observation of how the disease ravaged the body over time...

In his complaints to federal health officials, he drew comparisons between the Tuskegee study and medical experiments Nazi doctors had conducted on Jews and other prisoners. Federal scientists did not believe they were guilty of the same kind of moral and ethical sins, but after the Tuskegee study was exposed, the government put in place new rules about how it conducts medical research. Today, the study is often blamed for the unwillingness of some African Americans to participate in medical research.

“Peter’s life experiences led him to immediately identify the study as morally indefensible and to seek justice in the form of treatment for the men. Ultimately, he could not relent,” said the CDC’s Pestorius."