Showing posts with label fairness.

Thursday, July 18, 2024

Ethical AI: Tepper School Course Explores Responsible Business; Carnegie Mellon University Tepper School of Business, July 1, 2024

Carnegie Mellon University Tepper School of Business; Ethical AI: Tepper School Course Explores Responsible Business

"As artificial intelligence (AI) becomes more widely used, there is growing interest in the ethics of AI. A new article by Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University's Tepper School of Business, detailed a graduate course he developed titled "Ethics and AI." The course bridges AI-specific challenges with long-standing ethical discussions in business."

Saturday, July 6, 2024

THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School; Woodrow Hartzog, Boston University School of Law, Stanford Law School Center for Internet and Society; THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."

Monday, June 24, 2024

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

 Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

 James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Tuesday, September 12, 2023

How industry experts are navigating the ethics of artificial intelligence; CNN, September 11, 2023

CNN; How industry experts are navigating the ethics of artificial intelligence

"CNN heads to one of the longest-running artificial intelligence conferences in the world, to explore how industry experts and tech companies are trying to develop AI that is fairer and more transparent."

Monday, July 3, 2023

Keeping true to the Declaration of Independence is a matter of ethics; Ventura County Star, July 2, 2023

Ed Jones, Ventura County Star; Keeping true to the Declaration of Independence is a matter of ethics

"How do we keep faith with Jefferson, Franklin and the other founders? Due to the imperfections in human nature, there is no foolproof way, but a good plan would be to have all levels of our government — national, state and local — adopt ethical training similar to that of elective office holders here in California. Periodically, they must participate in ethics training which assumes there are universal ethical values consisting of fairness, loyalty, compassion trustworthiness, and responsibility that transcend other considerations and should be adhered to. This training consists of biannual computer sessions in which they must solve real-life problems based on the aforementioned ethical values.

I believe a real danger for elected officials and voters as well is the idea that certain societal values are so vital, so crucial, that they transcend normal ethical practices. This might be termed an “ends — means philosophy,” the idea that the ends justify the means. Mohandas Gandhi, former leader of India, observed that “the means are the ends in a democracy and good ends cannot come from questionable means.” 

No matter how exemplary our Declaration of Independence and Constitution, we are still relying on human beings to fulfill their promise. Ever since the Supreme Court took the power of judicial review — the power to tell us what the Constitution means and, in the process, affirm certain laws by declaring them constitutional or removing others by declaring them unconstitutional — the judgement of nine people has had a profound effect on our society. Was the Supreme Court correct in 1973 by saying the Ninth Amendment guarantees pregnant women the right to an abortion, or was it correct in 2022 by saying it didn’t?

In the final analysis we must conclude that it will be well-intentioned, ethical citizens and their elected and appointed representatives who will ensure the equitable future of what Abraham Lincoln referred to as our “ongoing experiment in self-government.”"

Friday, June 30, 2023

AI ethics toolkit updated to include more assessment components; ZDNet, June 27, 2023

Eileen Yu, ZDNet; AI ethics toolkit updated to include more assessment components

"A software toolkit has been updated to help financial institutions cover more areas in evaluating their "responsible" use of artificial intelligence (AI). 

First launched in February last year, the assessment toolkit focuses on four key principles around fairness, ethics, accountability, and transparency -- collectively called FEAT. It offers a checklist and methodologies for businesses in the financial sector to define the objectives of their AI and data analytics use and identify potential bias.

The toolkit was developed by a consortium led by the Monetary Authority of Singapore (MAS) that comprises 31 industry players, including Bank of China, BNY Mellon, Google Cloud, Microsoft, Goldman Sachs, Visa, OCBC Bank, Amazon Web Services, IBM, and Citibank."

Thursday, June 15, 2023

Korea issues first AI ethics checklist; The Korea Times, June 14, 2023

Lee Kyung-min, The Korea Times; Korea issues first AI ethics checklist

"The government has outlined the first national standard on how to use artificial intelligence (AI) ethically, in a move to bolster the emerging industry's sustainability and enhance its global presence, the industry ministry said Wednesday.

Korea Agency for Technology and Standards (KATS), an organization affiliated with the Ministry of Trade, Industry and Energy, issued a checklist of possible ethical issues and reviewed factors to be referenced and considered by service developers, providers and users.

The considerations specified for report and review include ethical issues arising in the process of collecting and processing data, the designing and development of AI, and the provision of such services to customers. 

The guidelines contain considerations such as transparency, fairness, harmlessness, responsibility, privacy protection, convenience, autonomy, reliability, sustainability and solidarity-enhancing qualities."

Saturday, March 12, 2022

About WBUR's Ethics Guide; WBUR, March 10, 2022

WBUR; About WBUR's Ethics Guide

"The committee approached the guidelines from the vantage point of WBUR journalists and journalism — while acknowledging the importance of the ethical guidelines and standards that need to be understood and embraced by everyone who works or is associated with WBUR.

The committee used the NPR Ethics Handbook as a structural model and source text, adopted with a WBUR voice. They also addressed ethics issues from a 2021 perspective, recognizing that much has changed in the public media and journalism field since the NPR Handbook was first written a decade ago."

WBUR Ethics Guide PDF: https://d279m997dpfwgl.cloudfront.net/wp/2022/03/WBUR-Ethics-Guidelines.pdf

Wednesday, March 25, 2020

Research in the time of coronavirus: keep it ethical; STAT, March 2, 2020

Beatriz da Costa Thomé and Heidi Larson, STAT; Research in the time of coronavirus: keep it ethical

"To provide an ethical framework for research during fraught times, the Nuffield Council on Bioethics recently released the report “Research in global health emergencies: ethical issues,” which we co-authored with several colleagues. It intends to serve as a resource for funders, governments, research institutions, and researchers, among others.

The report offers what we’ve called an “ethical compass” to guide different actors in ensuring research is conducted ethically during global health emergencies. It draws attention to three moral values — equal respect, fairness, and helping reduce suffering — that should inspire and guide approaches to this kind of research."

Friday, March 20, 2020

We will need a coronavirus commission; The Washington Post, March 20, 2020

The Washington Post; We will need a coronavirus commission

"We will need a commission on par with the 9/11 Commission when the immediate emergency is over. The commission will need full subpoena power and access to any government official and document it needs. Among the questions we need answered:

  • When was the president briefed?
  • What was he told about the coronavirus?
  • What steps did he take to prepare for the virus?
  • What other officials in the executive and legislative branches were aware of the threat? What did they do?
  • Why, until this week, was Trump downplaying the magnitude of the threat?
  • What precisely was the sequence of events that held up distribution of testing kits?
  • What resources were available that could have been tapped had governors, mayors and ordinary Americans understood the extent of the threat?
  • Who, if anyone, in government profited from advance knowledge of the threat?
  • What government structures or policies did the current administration make that impacted the response, either positively or negatively?
  • Why was the Defense Production Act not activated sooner?
  • Why were wealthy and famous individuals given tests when ordinary Americans still could not access them?"

Wednesday, November 6, 2019

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."

Tuesday, October 1, 2019

Metro’s ethics changes are welcome. But they’re only a start.; The Washington Post, September 29, 2019

Editorial Board, The Washington Post; Metro’s ethics changes are welcome. But they’re only a start.

"THE REPUTATION of former Metro chairman Jack Evans wasn’t the only thing that was tarnished amid the swirl of allegations that he used his public office to advance his private interests. Public trust in the Metro board was also badly shaken after it completely botched its handling of the allegations. It’s encouraging, then, that the board has taken a first step in its own rehabilitation by amending its code of ethics.
 
“The reforms will improve transparency, accountability and fairness of all parties,” board chairman Paul C. Smedberg said of revisions to the ethics policy that were approved on Thursday. The changes include a clearer definition of conflicts of interests, putting the transit agency’s inspector general in charge of investigations and opening the process to the public with requirements for written reports and discussions held in public."

Tuesday, January 15, 2019

Princeton collaboration brings new insights to the ethics of artificial intelligence; Princeton University, January 14, 2019

Molly Sharlach, Office of Engineering Communications, Princeton University; Princeton collaboration brings new insights to the ethics of artificial intelligence

"Should machines decide who gets a heart transplant? Or how long a person will stay in prison?

The growing use of artificial intelligence in both everyday life and life-altering decisions brings up complex questions of fairness, privacy and accountability. Surrendering human authority to machines raises concerns for many people. At the same time, AI technologies have the potential to help society move beyond human biases and make better use of limited resources.

“Princeton Dialogues on AI and Ethics” is an interdisciplinary research project that addresses these issues, bringing engineers and policymakers into conversation with ethicists, philosophers and other scholars. At the project’s first workshop in fall 2017, watching these experts get together and share ideas was “like nothing I’d seen before,” said Ed Felten, director of Princeton’s Center for Information Technology Policy (CITP). “There was a vision for what this collaboration could be that really locked into place.”

The project is a joint venture of CITP and the University Center for Human Values, which serves as “a forum that convenes scholars across the University to address questions of ethics and value” in diverse settings, said director Melissa Lane, the Class of 1943 Professor of Politics. Efforts have included a public conference, held in March 2018, as well as more specialized workshops beginning in 2017 that have convened experts to develop case studies, consider questions related to criminal justice, and draw lessons from the study of bioethics.

“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies,” said Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs. He emphasized that the careful implementation of AI systems can be an opportunity “to achieve better outcomes with less bias and less risk. It’s important not to see this as an entirely negative situation.”"

Monday, December 17, 2018

It’s high time for media to enter the No Kellyanne Zone — and stay there; The Washington Post, December 17, 2018

Margaret Sullivan, The Washington Post; It’s high time for media to enter the No Kellyanne Zone — and stay there

"The news media continues — even now when it should know better — to be addicted to “both sides” journalism. In the name of fairness, objectivity and respect for the office of the presidency, it still seems to take Trump — along with his array of deceptive surrogates — at his word, while knowing full well that his word isn’t good.

When major news organizations publish tweets and news alerts that repeat falsehoods merely because the president uttered them, it’s the same kind of journalistic malpractice as offering a prime interview spot to Kellyanne Conway."

Digital Ethics: Data is the new forklift; Internet of Business, December 17, 2018

Joanna Goodman, Internet of Business; Digital Ethics: Data is the new forklift

"Joanna Goodman reports from last week’s Digital ethics summit.

Governments, national and international institutions and businesses must join forces to make sure that AI and emerging technology are deployed successfully and responsibly. This was the central message from TechUK’s second Digital Ethics Summit in London.

Antony Walker, TechUK’s deputy CEO, set out the purpose of the summit: “How to deliver on the promise of tech that can provide benefits for people and society in a way that minimises harm”.

This sentiment was echoed throughout the day. Kate Rosenshine, data architect at Microsoft, reminded us that data is not unbiased and inclusivity and fairness are critical to data-driven decision-making. She quoted Cathy Bessant, CTO of Bank of America:

“Technologists cannot lose sight of how algorithms affect real people.”"

Thursday, August 30, 2018

N.Y. Mayor Taps Drexel Professor For First Algorithm Quality-Control Task Force; Drexel Now, June 4, 2018

Drexel Now; N.Y. Mayor Taps Drexel Professor For First Algorithm Quality-Control Task Force

"But how do we ensure that the algorithms are the impartial arbiters we expect them to be? Drexel University professor Julia Stoyanovich is part of the first group in the nation helping to answer this question in the biggest urban area in the world. New York Mayor Bill de Blasio tapped Stoyanovich to serve on the city’s Automated Decision Systems Task Force, a team charged with creating a process for reviewing algorithms through the lens of fairness, equity and accountability...

The [Automated Decision Systems] Task Force is the product of New York City’s algorithmic accountability law, which was passed in 2017 to ensure transparency in how the city uses automated decision systems. By 2019, the group must “provide recommendations about how agency automated decision systems data may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems,” according to one of the provisions of the law."

Monday, July 23, 2018

We Need Transparency in Algorithms, But Too Much Can Backfire; Harvard Business Review, July 23, 2018

Kartik Hosanagar and Vivian Jair, Harvard Business Review; We Need Transparency in Algorithms, But Too Much Can Backfire

"Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward."

Tuesday, May 29, 2018

Controversy Hides Within US Copyright Bill; Intellectual Property Watch, May 29, 2018

Steven Seidenberg, Intellectual Property Watch; Controversy Hides Within US Copyright Bill

"In a time when partisanship runs wild in the USA and the country’s political parties can’t seem to agree on anything, the Music Modernization Act is exceptional. The MMA passed the House of Representatives on 25 April with unanimous support. And for good reason. Almost all the major stakeholders back this legislation, which will bring some badly needed changes to copyright law’s treatment of music streaming. But wrapped in the MMA is a previously separate bill – the CLASSICS Act – that has been attacked by many copyright law experts, is opposed by many librarians and archivists, and runs counter to policy previously endorsed by the US Copyright Office."

Wednesday, August 16, 2017

Hundreds mourn for Heather Heyer, killed during Nazi protest in Charlottesville; Washington Post, August 16, 2017

Ellie Silverman, Arelis R. Hernández and Steve Hendrix, Washington Post; Hundreds mourn for Heather Heyer, killed during Nazi protest in Charlottesville

"“Thank you for making the word ‘hate’ more real,” said her law office coworker Feda Khateeb-Wilson. “But...thank you for making the word ‘love’ even stronger.”

In a packed old theater in the center of the quiet college town that has become a racial battleground, those who knew Heyer turned her memorial into a call for both understanding and action.

“They tried to kill my child to shut her up, but guess what, you just magnified her,” said her mother Susan Bro, sparking a cheering ovation from the packed auditorium, where Virginia Gov. Terry McAuliffe (D) and Sen. Tim Kaine (D-Va) were among the crowd.

“No father should ever have to do this,” said Mark Heyer, his voice breaking on a stage filled with flowers and images of the 32-year-old paralegal who was killed Saturday when a car plowed into a crowd of protestors gathered to oppose a white supremacist rally."