Showing posts with label fairness.

Tuesday, September 12, 2023

How industry experts are navigating the ethics of artificial intelligence; CNN, September 11, 2023

CNN; How industry experts are navigating the ethics of artificial intelligence

"CNN heads to one of the longest-running artificial intelligence conferences in the world, to explore how industry experts and tech companies are trying to develop AI that is fairer and more transparent."

Monday, July 3, 2023

Keeping true to the Declaration of Independence is a matter of ethics; Ventura County Star, July 2, 2023

Ed Jones, Ventura County Star; Keeping true to the Declaration of Independence is a matter of ethics

"How do we keep faith with Jefferson, Franklin and the other founders? Due to the imperfections in human nature, there is no foolproof way, but a good plan would be to have all levels of our government — national, state and local — adopt ethical training similar to that of elective office holders here in California. Periodically, they must participate in ethics training, which assumes there are universal ethical values consisting of fairness, loyalty, compassion, trustworthiness, and responsibility that transcend other considerations and should be adhered to. This training consists of biannual computer sessions in which they must solve real-life problems based on the aforementioned ethical values.

I believe a real danger for elected officials and voters as well is the idea that certain societal values are so vital, so crucial, that they transcend normal ethical practices. This might be termed an “ends — means philosophy,” the idea that the ends justify the means. Mohandas Gandhi, former leader of India, observed that “the means are the ends in a democracy and good ends cannot come from questionable means.” 

No matter how exemplary our Declaration of Independence and Constitution, we are still relying on human beings to fulfill their promise. Ever since the Supreme Court took the power of judicial review — the power to tell us what the Constitution means and, in the process, affirm certain laws by declaring them constitutional or removing others by declaring them unconstitutional — the judgement of nine people has had a profound effect on our society. Was the Supreme Court correct in 1973 by saying the Ninth Amendment guarantees pregnant women the right to an abortion, or was it correct in 2022 by saying it didn’t?

In the final analysis we must conclude that it will be well-intentioned, ethical citizens and their elected and appointed representatives who will ensure the equitable future of what Abraham Lincoln referred to as our “ongoing experiment in self-government.”"

Friday, June 30, 2023

AI ethics toolkit updated to include more assessment components; ZDNet, June 27, 2023

Eileen Yu, ZDNet; AI ethics toolkit updated to include more assessment components

"A software toolkit has been updated to help financial institutions cover more areas in evaluating their "responsible" use of artificial intelligence (AI). 

First launched in February last year, the assessment toolkit focuses on four key principles around fairness, ethics, accountability, and transparency -- collectively called FEAT. It offers a checklist and methodologies for businesses in the financial sector to define the objectives of their AI and data analytics use and identify potential bias.

The toolkit was developed by a consortium led by the Monetary Authority of Singapore (MAS) that comprises 31 industry players, including Bank of China, BNY Mellon, Google Cloud, Microsoft, Goldman Sachs, Visa, OCBC Bank, Amazon Web Services, IBM, and Citibank."

Thursday, June 15, 2023

Korea issues first AI ethics checklist; The Korea Times, June 14, 2023

Lee Kyung-min, The Korea Times; Korea issues first AI ethics checklist

"The government has outlined the first national standard on how to use artificial intelligence (AI) ethically, in a move to bolster the emerging industry's sustainability and enhance its global presence, the industry ministry said Wednesday.

Korea Agency for Technology and Standards (KATS), an organization affiliated with the Ministry of Trade, Industry and Energy, issued a checklist of possible ethical issues and reviewed factors to be referenced and considered by service developers, providers and users.

The considerations specified for report and review include ethical issues arising in the process of collecting and processing data, the designing and development of AI, and the provision of such services to customers. 

The guidelines contain considerations such as transparency, fairness, harmlessness, responsibility, privacy protection, convenience, autonomy, reliability, sustainability and solidarity-enhancing qualities."

Saturday, March 12, 2022

About WBUR's Ethics Guide; WBUR, March 10, 2022

WBUR; About WBUR's Ethics Guide

"The committee approached the guidelines from the vantage point of WBUR journalists and journalism — while acknowledging the importance of the ethical guidelines and standards that need to be understood and embraced by everyone who works or is associated with WBUR.

The committee used the NPR Ethics Handbook as a structural model and source text, adopted with a WBUR voice. They also addressed ethics issues from a 2021 perspective, recognizing that much has changed in the public media and journalism field since the NPR Handbook was first written a decade ago."

WBUR Ethics Guide PDF: https://d279m997dpfwgl.cloudfront.net/wp/2022/03/WBUR-Ethics-Guidelines.pdf

Wednesday, March 25, 2020

Research in the time of coronavirus: keep it ethical; STAT, March 2, 2020

Beatriz da Costa Thomé and Heidi Larson, STAT; Research in the time of coronavirus: keep it ethical

"To provide an ethical framework for research during fraught times, the Nuffield Council on Bioethics recently released the report “Research in global health emergencies: ethical issues,” which we co-authored with several colleagues. It intends to serve as a resource for funders, governments, research institutions, and researchers, among others.

The report offers what we’ve called an “ethical compass” to guide different actors in ensuring research is conducted ethically during global health emergencies. It draws attention to three moral values — equal respect, fairness, and helping reduce suffering — that should inspire and guide approaches to this kind of research."

Friday, March 20, 2020

We will need a coronavirus commission; The Washington Post, March 20, 2020

"We will need a commission on par with the 9/11 Commission when the immediate emergency is over. The commission will need full subpoena power and access to any government official and document it needs. Among the questions we need answered:

  • When was the president briefed?
  • What was he told about the coronavirus?
  • What steps did he take to prepare for the virus?
  • What other officials in the executive and legislative branches were aware of the threat? What did they do?
  • Why, until this week, was Trump downplaying the magnitude of the threat?
  • What precisely was the sequence of events that held up distribution of testing kits?
  • What resources were available that could have been tapped had governors, mayors and ordinary Americans understood the extent of the threat?
  • Who, if anyone, in government profited from advance knowledge of the threat?
  • What government structures or policies did the current administration make that impacted the response, either positively or negatively?
  • Why was the Defense Production Act not activated sooner?
  • Why were wealthy and famous individuals given tests when ordinary Americans still could not access them?"

Wednesday, November 6, 2019

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
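
[Editorial illustration, not drawn from the Harvard Business Review article: a minimal Python sketch of two formal fairness criteria of the kind the piece says machine learning is pushing us to make explicit — demographic parity and equal opportunity. The group labels, outcomes, and model decisions below are hypothetical toy data.]

# A minimal, hypothetical sketch: two explicit fairness definitions on toy data.

def rate(preds, mask):
    # Fraction of positive predictions within the rows selected by mask.
    selected = [p for p, m in zip(preds, mask) if m]
    return sum(selected) / len(selected) if selected else 0.0

groups = ["A", "A", "A", "B", "B", "B", "B", "A"]   # hypothetical group labels
y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # hypothetical true outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # hypothetical model decisions

in_a = [g == "A" for g in groups]
in_b = [g == "B" for g in groups]

# Demographic parity: positive-decision rates should match across groups.
dp_gap = abs(rate(y_pred, in_a) - rate(y_pred, in_b))

# Equal opportunity: true-positive rates should match across groups.
tpr_a = rate(y_pred, [m and t == 1 for m, t in zip(in_a, y_true)])
tpr_b = rate(y_pred, [m and t == 1 for m, t in zip(in_b, y_true)])
eo_gap = abs(tpr_a - tpr_b)

print(f"Demographic parity gap: {dp_gap:.2f}")   # 0.00 on this toy data
print(f"Equal opportunity gap:  {eo_gap:.2f}")   # 0.50 on this toy data

On this toy data the two criteria disagree: decision rates match across groups while true-positive rates do not. Having to choose between such precise, checkable definitions is the kind of trade-off the article argues machine learning now forces into the open.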

Tuesday, October 1, 2019

Metro’s ethics changes are welcome. But they’re only a start.; The Washington Post, September 29, 2019

Editorial Board, The Washington Post; Metro’s ethics changes are welcome. But they’re only a start.

"THE REPUTATION of former Metro chairman Jack Evans wasn’t the only thing that was tarnished amid the swirl of allegations that he used his public office to advance his private interests. Public trust in the Metro board was also badly shaken after it completely botched its handling of the allegations. It’s encouraging, then, that the board has taken a first step in its own rehabilitation by amending its code of ethics.
 
“The reforms will improve transparency, accountability and fairness of all parties,” board chairman Paul C. Smedberg said of revisions to the ethics policy that were approved on Thursday. The changes include a clearer definition of conflicts of interests, putting the transit agency’s inspector general in charge of investigations and opening the process to the public with requirements for written reports and discussions held in public."

Tuesday, January 15, 2019

Princeton collaboration brings new insights to the ethics of artificial intelligence; Princeton University, January 14, 2019

Molly Sharlach, Office of Engineering Communications, Princeton University; Princeton collaboration brings new insights to the ethics of artificial intelligence

"Should machines decide who gets a heart transplant? Or how long a person will stay in prison?

The growing use of artificial intelligence in both everyday life and life-altering decisions brings up complex questions of fairness, privacy and accountability. Surrendering human authority to machines raises concerns for many people. At the same time, AI technologies have the potential to help society move beyond human biases and make better use of limited resources.

“Princeton Dialogues on AI and Ethics” is an interdisciplinary research project that addresses these issues, bringing engineers and policymakers into conversation with ethicists, philosophers and other scholars. At the project’s first workshop in fall 2017, watching these experts get together and share ideas was “like nothing I’d seen before,” said Ed Felten, director of Princeton’s Center for Information Technology Policy (CITP). “There was a vision for what this collaboration could be that really locked into place.”

The project is a joint venture of CITP and the University Center for Human Values, which serves as “a forum that convenes scholars across the University to address questions of ethics and value” in diverse settings, said director Melissa Lane, the Class of 1943 Professor of Politics. Efforts have included a public conference, held in March 2018, as well as more specialized workshops beginning in 2017 that have convened experts to develop case studies, consider questions related to criminal justice, and draw lessons from the study of bioethics.

“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies,” said Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs. He emphasized that the careful implementation of AI systems can be an opportunity “to achieve better outcomes with less bias and less risk. It’s important not to see this as an entirely negative situation.”"

Monday, December 17, 2018

It’s high time for media to enter the No Kellyanne Zone — and stay there; The Washington Post, December 17, 2018

Margaret Sullivan, The Washington Post; It’s high time for media to enter the No Kellyanne Zone — and stay there

"The news media continues — even now when it should know better — to be addicted to “both sides” journalism. In the name of fairness, objectivity and respect for the office of the presidency, it still seems to take Trump — along with his array of deceptive surrogates — at his word, while knowing full well that his word isn’t good.

When major news organizations publish tweets and news alerts that repeat falsehoods merely because the president uttered them, it’s the same kind of journalistic malpractice as offering a prime interview spot to Kellyanne Conway."

Digital Ethics: Data is the new forklift; Internet of Business, December 17, 2018

Joanna Goodman, Internet of Business; Digital Ethics: Data is the new forklift

"Joanna Goodman reports from last week’s Digital Ethics Summit.

Governments, national and international institutions and businesses must join forces to make sure that AI and emerging technology are deployed successfully and responsibly. This was the central message from TechUK’s second Digital Ethics Summit in London.

Antony Walker, TechUK’s deputy CEO, set out the purpose of the summit: “How to deliver on the promise of tech that can provide benefits for people and society in a way that minimises harm.”

This sentiment was echoed throughout the day. Kate Rosenshine, data architect at Microsoft, reminded us that data is not unbiased and that inclusivity and fairness are critical to data-driven decision-making. She quoted Cathy Bessant, CTO of Bank of America: “Technologists cannot lose sight of how algorithms affect real people.”"

Thursday, August 30, 2018

N.Y. Mayor Taps Drexel Professor For First Algorithm Quality-Control Task Force; Drexel Now, June 4, 2018

Drexel Now; N.Y. Mayor Taps Drexel Professor For First Algorithm Quality-Control Task Force

"But how do we ensure that the algorithms are the impartial arbiters we expect them to be? Drexel University professor Julia Stoyanovich is part of the first group in the nation helping to answer this question in the biggest urban area in the world. New York Mayor Bill de Blasio tapped Stoyanovich to serve on the city’s Automated Decision Systems Task Force, a team charged with creating a process for reviewing algorithms through the lens of fairness, equity and accountability...

The [Automated Decision Systems] Task Force is the product of New York City’s algorithmic accountability law, which was passed in 2017 to ensure transparency in how the city uses automated decision systems. By 2019, the group must “provide recommendations about how agency automated decision systems data may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems,” according to one of the provisions of the law."

Monday, July 23, 2018

We Need Transparency in Algorithms, But Too Much Can Backfire; Harvard Business Review, July 23, 2018

Kartik Hosanagar and Vivian Jair, Harvard Business Review; We Need Transparency in Algorithms, But Too Much Can Backfire

"Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward."

Tuesday, May 29, 2018

Controversy Hides Within US Copyright Bill; Intellectual Property Watch, May 29, 2018

Steven Seidenberg, Intellectual Property Watch; Controversy Hides Within US Copyright Bill

"In a time when partisanship runs wild in the USA and the country’s political parties can’t seem to agree on anything, the Music Modernization Act is exceptional. The MMA passed the House of Representatives on 25 April with unanimous support. And for good reason. Almost all the major stakeholders back this legislation, which will bring some badly needed changes to copyright law’s treatment of music streaming. But wrapped in the MMA is a previously separate bill – the CLASSICS Act – that has been attacked by many copyright law experts, is opposed by many librarians and archivists, and runs counter to policy previously endorsed by the US Copyright Office."

Wednesday, August 16, 2017

Hundreds mourn for Heather Heyer, killed during Nazi protest in Charlottesville; Washington Post, August 16, 2017

Ellie Silverman, Arelis R. Hernández and Steve Hendrix, Washington Post; Hundreds mourn for Heather Heyer, killed during Nazi protest in Charlottesville

"“Thank you for making the word ‘hate’ more real,” said her law office coworker Feda Khateeb-Wilson. “But...thank you for making the word ‘love’ even stronger.”

In a packed old theater in the center of the quiet college town that has become a racial battleground, those who knew Heyer turned her memorial into a call for both understanding and action.

“They tried to kill my child to shut her up, but guess what, you just magnified her,” said her mother Susan Bro, sparking a cheering ovation from the packed auditorium, where Virginia Gov. Terry McAuliffe (D) and Sen. Tim Kaine (D-Va) were among the crowd.

“No father should ever have to do this,” said Mark Heyer, his voice breaking on a stage filled with flowers and images of the 32-year-old paralegal who was killed Saturday when a car plowed into a crowd of protestors gathered to oppose a white supremacist rally."

Friday, July 7, 2017

States consider tougher web privacy laws; Bloomberg via News Chief, July 6, 2017

Todd Shields, Bloomberg via News Chief; States consider tougher web privacy laws

"Soon after President Donald Trump took office with a pledge to cut regulations, Republicans in Congress killed an Obama-era rule restricting how broadband companies may use customer data such as web browsing histories.

But the rule may be finding new life in the states.

Lawmakers in almost two dozen state capitols are considering ways to bolster consumer privacy protections rolled back with Trump’s signature in April. The proposals being debated from New York to California would limit how AT&T, Verizon Communications and Comcast use subscribers’ data."

Tuesday, June 13, 2017

When a Computer Program Keeps You in Jail; New York Times, June 13, 2017

Rebecca Wexler, New York Times; When a Computer Program Keeps You in Jail

"The criminal justice system is becoming automated. At every stage — from policing and investigations to bail, evidence, sentencing and parole — computer systems play a role. Artificial intelligence deploys cops on the beat. Audio sensors generate gunshot alerts. Forensic analysts use probabilistic software programs to evaluate fingerprints, faces and DNA. Risk-assessment instruments help to determine who is incarcerated and for how long.

Technological advancement is, in theory, a welcome development. But in practice, aspects of automation are making the justice system less fair for criminal defendants.

The root of the problem is that automated criminal justice technologies are largely privately owned and sold for profit. The developers tend to view their technologies as trade secrets. As a result, they often refuse to disclose details about how their tools work, even to criminal defendants and their attorneys, even under a protective order, even in the controlled context of a criminal proceeding or parole hearing."

Friday, May 19, 2017

Americans Want More Say in the Privacy of Personal Data; Consumer Reports, May 18, 2017

Bree Fowler, Consumer Reports; Americans Want More Say in the Privacy of Personal Data

[Kip Currier: Take a look at Consumer Reports' latest survey data on U.S. consumers' concerns about privacy and their personal data: significant majorities want more control over what data is collected and more transparency (not less!) regarding what Internet service providers can and can't do with that personal data.

Then consider this May 18, 2017 disconnect: "The Federal Communications Commission (FCC), led by chairman Ajit Pai, voted two to one to start the formal process of dismantling “net neutrality” rules put in place in 2015."]

"The latest CR Consumer Voices survey reveals that people have been increasingly worried about the issue in 2017. Seventy percent of Americans lack confidence that their personal data is private and safe from distribution without their knowledge, according to the nationally representative survey of 1,007 adults conducted in April.

That number climbed from 65 percent since we first asked about the topic in January.

Respondents to the April survey also said they want more control over what data is collected. Ninety-two percent said that internet service providers, such as Comcast and Verizon, should be required to secure permission from users before selling or sharing their data.

The same proportion thinks consumers should have the right to request a complete list of the data an internet service provider or website has collected about them.

Finally, respondents spoke out about how such data may be used to charge online shoppers different prices for the same goods and services—without consumers knowing about it. This kind of dynamic pricing can be based on factors from age to browsing history to home address. Sixty-five percent of respondents oppose the practice.

Though consumers say they want stronger privacy protections, federal actions are moving the rules in the opposite direction."

Saturday, December 17, 2016

Genetically engineered humans will arrive sooner than you think. And we're not ready.; Vox, December 15, 2016

Sean Illing, Vox; Genetically engineered humans will arrive sooner than you think. And we're not ready.

"Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising...

Sean Illing

I'm always amazed at how little technologists tend to think about the moral and political implications of their work. For example, it's hard to imagine how disruptive this kind of biotechnology will be to our sense of fairness and equity.

We should be very concerned about the societal risks that would emerge alongside these bioenhancement technologies. Because presumably, in the beginning at least, only rich people will have access to this technology, and I wonder what kind of disorder that could spawn.

Michael Bess

Well, let's put it this way: If only rich people have access to these technologies, then we have a very big problem, because it's going to take the kinds of inequalities that have been getting worse over recent decades, even in a rich country like ours, and make them much worse, and inscribe those inequalities into our very biology.

So it's going to be very hard for somebody to be born poor and bootstrap themselves up into a higher position in society when the upper echelons of society are not only enjoying the privileges of health and education and housing and all that, but are bioenhancing themselves to unprecedented levels of performance. That's going to render permanent and intractable the separation between rich and poor.

For me, then, one of the imperatives that's going to arise out of bioenhancement is we're going to have to, in a sense, become Sweden. We're going to have to find a way to socialize the benefits of these technologies and offer them, at least as an option, to all citizens.

Doing this in a rich country like ours is hard enough — the challenge of doing this on a planetary scale is far more daunting."