Showing posts with label AI ethics. Show all posts

Saturday, October 22, 2022

Opinion | D.C. might ban ‘algorithmic discrimination.’ That’s good for everyone.; The Washington Post, October 7, 2022

 

The Washington Post; D.C. might ban ‘algorithmic discrimination.’ That’s good for everyone.

"If civil rights protections are to keep pace with this kind of technological threat to equality, they require an updated legal framework. We urge the D.C. Council to pass the Stop Discrimination by Algorithms Act and hope that other states and federal legislators soon follow."

Friday, August 26, 2022

AI Creating 'Art' Is An Ethical And Copyright Nightmare; Kotaku, August 25, 2022

Luke Plunkett, Kotaku; AI Creating 'Art' Is An Ethical And Copyright Nightmare

If a machine makes art, is it even art? And what does this mean for actual artists?

"Basically, we now live in a world where machines have been fed millions upon millions of pieces of human endeavour, and are now using the cumulative data they’ve amassed to create their own works. This has been fun for casual users and interesting for tech enthusiasts, sure, but it has also created an ethical and copyright black hole, where everyone from artists to lawyers to engineers has very strong opinions on what this all means, for their jobs and for the nature of art itself."

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Thursday, May 26, 2022

AI Ethics And The Quagmire Of Whether You Have A Legal Right To Know Of AI Inferences About You, Including Those Via AI-Based Self-Driving Cars; Forbes, May 25, 2022

Lance Eliot, Forbes; AI Ethics And The Quagmire Of Whether You Have A Legal Right To Know Of AI Inferences About You, Including Those Via AI-Based Self-Driving Cars

"Speaking of sitting, please sit down for the next eye-opening statement about this. You might be unpleasantly surprised to know that those AI inferences are not readily or customarily a core part of your legal rights per se, at least not as you might have naturally presumed that they were. An ongoing legal and ethical debate is still underway about the nature of AI-based inferences, including some experts that insist AI inferences are emphatically a central aspect of your personal data and other experts strenuously counterargue that AI inferences are assuredly not at all in the realm of so-called personal data (the catchphrase of “personal data” is usually the cornerstone around which data-related legal rights are shaped).

All of this raises a lot of societal challenges. AI is becoming more and more pervasive throughout society. The odds are that AI is making lots and lots of inferences about us all. Are we allowing ourselves to be vulnerable and at the mercy of these AI-based inferences? Should we be clamoring to make sure that AI inferences are within our scope of data-related rights? These questions are being bandied around by experts in the law and likewise by expert ethicists. For my ongoing coverage of AI Ethics and Ethical AI topics, see the link here and the link here, just to name a few."

Monday, May 2, 2022

AI Ethics Battling Stubborn Myth That AI Is Infallible, Including That Autonomous Self-Driving Cars Are Going To Be Unfailing And Error-Free; Forbes, May 2, 2022

Lance Eliot, Forbes; AI Ethics Battling Stubborn Myth That AI Is Infallible, Including That Autonomous Self-Driving Cars Are Going To Be Unfailing And Error-Free

"Consider what ominously will happen if some people begin to believe in the AI infallibility myth with regards to AI-based self-driving cars. Pedestrians brainwashed with that belief would certainly be willing to wander directly into the path of an oncoming driverless vehicle. In their minds, they believe that AI will ensure that the self-driving car does not strike them. Those pundits that are pushing the uncrashable narrative ought to be ashamed of their efforts in making such risky situations viable.

In short, the AI infallibility myth is both wrong and dangerous. 

On top of that, the AI infallibility myth is exasperatingly and scarily enduring."

Wednesday, March 23, 2022

COMIC: How a computer scientist fights bias in algorithms; NPR, TED Radio Hour Comics, March 14, 2022

Vreni Stollberger, LA Johnson, NPR, TED Radio Hour Comics; COMIC: How a computer scientist fights bias in algorithms

The ex-Google researcher staring down Big Tech; Politico, March 18, 2022

Brakkton Booker, Politico; The ex-Google researcher staring down Big Tech

"THE RECAST:  President Biden ran on a platform promising to root out inequities in federal agencies and programs. Has his administration done enough to tackle the issue of discriminatory AI?

GEBRU: I'm glad to see that some initiatives have started. I like that the Office Of Science And Technology Policy (OSTP), for instance, is filled with people I respect, like Alondra Nelson, who is now its head.

My biggest comment on this is that a lot of tech companies — all tech companies — actually, don't have to do any sort of test to prove that they're not putting out harmful products...

The burden of proof always seems to be on us...The burden of proof should be on these tech companies."

Saturday, February 5, 2022

Two members of Google’s Ethical AI group leave to join Timnit Gebru’s nonprofit; The Verge, February 2, 2022

Emma Roth, The Verge; Two members of Google’s Ethical AI group leave to join Timnit Gebru’s nonprofit

"Two members of Google’s Ethical AI group have announced their departures from the company, according to a report from Bloomberg. Senior researcher Alex Hanna, and software engineer Dylan Baker, will join Timnit Gebru’s nonprofit research institute, Distributed AI Research (DAIR)...

In a post announcing her resignation on Medium, Hanna criticizes the “toxic” work environment at Google, and draws attention to a lack of representation of Black women at the company."

Thursday, January 20, 2022

At Google Cloud, A.I. ethics requires ‘Iced Tea’ and ‘Lemonaid’; Fortune, January 11, 2022

"For now, Moore says, the best safeguard is very careful human review. It is up to people to ask tough questions about the ethics of how the system is going to be used and also to think hard about both the abuse of such a system and about what the unintended consequences might be. This needs to be combined with careful testing to find the system’s biases and potential failure points."

Saturday, January 15, 2022

We’re failing at the ethics of AI. Here’s how we make real impact; World Economic Forum, January 14, 2022

Friday, December 31, 2021

Driverless Cars And AI Ethics; Forbes, December 29, 2021

Cindy Gordon, Forbes; Driverless Cars And AI Ethics


"One of the most validated research surveys of machine ethics1, called the Moral Machine, was published in Nature, found that many of the moral principles that guide a driver’s decisions vary by country. This reinforces a regulatory set of complexities to navigate on cultural preferences. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions were less likely to spare a pedestrian who stepped into traffic illegally. There were over 13 scenarios identified where a death could not be avoided and respondents had to make a decision on the impacts to old, rich, poor, more people or less people being killed. The research found that there are cultural variances in public preferences which governments and self-driving cars would need to take into account to gain varying jurisdictional confidence.

In other words different rules for countries would need to apply. Talk about moral and ethical complexities in design engineering. If this , then this etc."

Sunday, November 28, 2021

193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence; UN News, November 25, 2021

UN News; 193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence

"Artificial intelligence is present in everyday life, from booking flights and applying for loans to steering driverless cars. It is also used in specialized fields such as cancer screening or to help create inclusive environments for the disabled.

According to UNESCO, AI is also supporting the decision-making of governments and the private sector, as well as helping combat global problems such as climate change and world hunger.

However, the agency warns that the technology ‘is bringing unprecedented challenges’.

“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable Artificial Intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues”, UNESCO explained in a statement.

Considering this, the adopted text aims to guide the construction of the necessary legal infrastructure to ensure the ethical development of this technology.

“The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member states in its implementation and ask them to report regularly on their progress and practices”, said UNESCO chief Audrey Azoulay."

Thursday, October 28, 2021

This Program Can Give AI a Sense of Ethics—Sometimes; Wired, October 28, 2021

Wired; This Program Can Give AI a Sense of Ethics—Sometimes

"Frost says the debate around Delphi reflects a broader question that the tech industry is wrestling with—how to build technology responsibly. Too often, he says, when it comes to content moderation, misinformation, and algorithmic bias, companies try to wash their hands of the problem by arguing that all technology can be used for good and bad.

When it comes to ethics, “there’s no ground truth, and sometimes tech companies abdicate responsibility because there’s no ground truth,” Frost says. “The better approach is to try.”"

Wednesday, October 27, 2021

Ethics By Design: Steps To Prepare For AI Rules Changes; Forbes, October 21, 2021

Ursula Morgenstern, Forbes; Ethics By Design: Steps To Prepare For AI Rules Changes


"Ethics by design is a different way of thinking for companies. In addition to considering profit-driven outcomes, companies will now need to assess the harm and impact of their practices and provide oversight to manage AI’s risks. 

The AI Act sets the stage for change. Proposed in April, the legislation aims at mitigating the harmful use of AI. It facilitates the transparent, ethical use of AI — and keeps machine intelligence under human control. The regulations would outlaw four AI technologies that cause physical and psychological harm: social scoring, dark-pattern AI, manipulation and real-time biometric identification systems. 

Equally important to companies are the act’s proposed penalties. Fines for noncompliance are significantly higher than those for the EU’s General Data Protection Regulation (GDPR), ranging up to 30 million Euros, or 6% of annual revenue. In contrast, the GDPR imposes fines of up to 20 million Euros or 4% of revenue."

Thursday, October 7, 2021

AI-ethics pioneer Margaret Mitchell on her five-year plan at open-source AI startup Hugging Face; Emerging Tech Brew, October 4, 2021

Hayden Field, Emerging Tech Brew; AI-ethics pioneer Margaret Mitchell on her five-year plan at open-source AI startup Hugging Face

"Hugging Face wants to bring these powerful tools to more people. Its mission: Help companies build, train, and deploy AI models—specifically natural language processing (NLP) systems—via its open-source tools, like Transformers and Datasets. It also offers pretrained models available for download and customization.

So what does it mean to play a part in “democratizing” these powerful NLP tools? We chatted with Mitchell about the split from Google, her plans for her new role, and her near-future predictions for responsible AI."

Tuesday, April 27, 2021

Stop talking about AI ethics. It’s time to talk about power.; MIT Technology Review, April 23, 2021

MIT Technology Review; Stop talking about AI ethics. It’s time to talk about power.

"If there’s been a real trap in the tech sector for the last decade, it’s that the theory of change has always centered engineering. It’s always been, “If there’s a problem, there’s a tech fix for it.” And only recently are we starting to see that broaden out to “Oh, well, if there’s a problem, then regulation can fix it. Policymakers have a role.”

But I think we need to broaden that out even further. We have to say also: Where are the civil society groups, where are the activists, where are the advocates who are addressing issues of climate justice, labor rights, data protection? How do we include them in these discussions? How do we include affected communities?

In other words, how do we make this a far deeper democratic conversation around how these systems are already influencing the lives of billions of people in primarily unaccountable ways that live outside of regulation and democratic oversight?

In that sense, this book is trying to de-center tech and starting to ask bigger questions around: What sort of world do we want to live in?""

Friday, April 16, 2021

Big Tech’s guide to talking about AI ethics; Wired, April 13, 2021

Wired; Big Tech’s guide to talking about AI ethics

"AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.

The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.

Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in...

diversity, equity, and inclusion (ph) - The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them...

ethics board (ph) - A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).

ethics principles (ph) - A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI."

Sunday, January 24, 2021

What Buddhism can do for AI ethics; MIT Technology Review, January 6, 2021

Saturday, September 19, 2020

AI ethics groups are repeating one of society’s classic mistakes; MIT Technology Review, September 14, 2020

Too many councils and advisory boards still consist mostly of people based in Europe or the United States

"We believe these groups are well-intentioned and are doing worthwhile work. The AI community should, indeed, agree on a set of international definitions and concepts for ethical AI. But without more geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe."

Thursday, July 30, 2020

Study: Only 18% of data science students are learning about AI ethics; TNW, July 3, 2020

Thomas Macaulay, TNW; Study: Only 18% of data science students are learning about AI ethics
The neglect of AI ethics extends from universities to industry

"At least we can rely on universities to teach the next generation of computer scientists to make. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

Notably, the worryingly low figures aren’t due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula."