Showing posts with label AI ethics.

Thursday, April 30, 2020

AI researchers propose ‘bias bounties’ to put ethics principles into practice; VentureBeat, April 17, 2020

Khari Johnson, VentureBeat; AI researchers propose ‘bias bounties’ to put ethics principles into practice

"Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software.

This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say."
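
The bounty analogy becomes clearer with a concrete check. Below is a minimal sketch, not taken from the paper, of the kind of reproducible evidence a bias-bounty submission might contain: a measured gap in a model's positive-prediction rate across demographic groups. The function, the toy data, and the flagging threshold are all illustrative assumptions.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, rates): the largest difference in positive-prediction
    rate between any two groups, plus the per-group rates."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening model that favors group A over group B.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)               # {'A': 0.8, 'B': 0.2}
print(f"gap = {gap:.2f}")  # gap = 0.60; a report might flag, say, any gap above 0.05
```

A submission like this is attractive for the same reason a security bug report is: it is small, reproducible, and verifiable by the vendor.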

Friday, January 24, 2020

This App Is a Dangerous Invasion of Your Privacy—and the FBI Uses It; Popular Mechanics, January 22, 2020

Popular Mechanics; This App Is a Dangerous Invasion of Your Privacy—and the FBI Uses It

"Even Google Wouldn't Build This

When companies like Google—which has received a ton of flack for taking government contracts to work on artificial intelligence solutions—won't even build an app, you know it's going to cause a stir. Back in 2011, former Google Chairman Eric Schmidt said a tool like Clearview AI's app was one of the few pieces of tech that the company wouldn't develop because it could be used "in a very bad way."

Facebook, for its part, developed something pretty similar to what Clearview AI offers, but at least had the foresight not to publicly release it. That application, developed between 2015 and 2016, allowed employees to identify colleagues and friends who had enabled facial recognition by pointing their phone cameras at their faces. Since then, the app has been discontinued.

Meanwhile, Clearview AI is nowhere near finished. Hidden in the app's code, which the New York Times evaluated, is programming language that could pair the app to augmented reality glasses, meaning that in the future, it's possible we could identify every person we see in real time.

Early Pushback

Perhaps the silver lining is that we found out about Clearview AI at all. Its public discovery—and accompanying criticism—have led to well-known organizations coming out as staunchly opposed to this kind of tech.

Fight for the Future tweeted that "an outright ban" on these AI tools is the only way to fix this privacy issue—not quirky jewelry or sunglasses that can help to protect your identity by confusing surveillance systems."

Wednesday, January 15, 2020

Ethics In AI: Why Values For Data Matter; Forbes, December 18, 2019

Marc Teerlink, SAP, Global Vice President of Intelligent Enterprise Solutions & Artificial Intelligence, Forbes; Ethics In AI: Why Values For Data Matter

"The Double-Edged Sword of AI and Predictive Analytics

This rising impact can be both a blessing and a concern. It is a blessing — for example, when AI and Predictive Analytics use big data to monitor growing conditions, helping an individual farmer make everyday decisions that can determine if they will be able to feed their family (or not).
Yet it can also be a real concern when biased information is applied at the outset, leading machines to make biased decisions, amplifying our human prejudices in a manner that is inherently unfair.

As Joaquim Bretcha, president of ESOMAR, says, “technology is the reflection of the values, principles, interests and biases of its creators”...

What’s the takeaway from this? We need to apply and own governance principles that focus on providing transparency on how Artificial Intelligence and Predictive Analytics achieve their answers.

I will close by asking one question to ponder when thinking about how to treat data as an asset in your organization:

“How will machines know what we value if we don’t articulate (and own) what we value ourselves?” *

Dig deeper: Want to hear more on ethics in AI, transparency, and treating data as an asset? Watch Marc’s recent masterclass at Web Summit 2019 here

*Liberally borrowed from John C. Havens’ “Heartificial Intelligence”"

Tuesday, January 14, 2020

‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground; Forbes, December 22, 2019

Joe McKendrick, Forbes; ‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground

"While artificial intelligence is the trend du jour across enterprises of all types, there’s still scant attention being paid to its ethical ramifications. Perhaps it’s time for people to step up and ask the hard questions. For enterprises, it’s time to bring together — or recruit — people who can ask the hard questions.

In one recent survey by Genesys, 54% of employers questioned say they are not troubled that AI could be used unethically by their companies as a whole or by individual employees. “Employees appear more relaxed than their bosses, with only 17% expressing concern about their companies,” the survey’s authors add...

Sandler and his co-authors focus on the importance of their final point, urging that organizations establish an AI ethics committee, comprised of stakeholders from across the enterprise — technical, legal, ethical, and organizational. This is still unexplored territory, they caution: “There are not yet data and AI ethics committees with established records of being effective and well-functioning, so there are no success models to serve as case-studies or best practices for how to design and implement them.”"

Monday, January 13, 2020

Troll Watch: AI Ethics; NPR, January 11, 2020

NPR; Troll Watch: AI Ethics

"NPR's Michel Martin speaks with The Washington Post's Drew Harwell about the ethical concerns posed by new AI technology."

"MICHEL MARTIN, HOST:

We're going to spend the next few minutes talking about developments in artificial intelligence, or AI. This week, the Trump administration outlined its AI policy in a draft memo which encouraged federal agencies to, quote, "avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth," unquote. And at the Consumer Electronics Show, the annual technology showcase, U.S. Chief Technology Officer Michael Kratsios elaborated on the administration's approach, warning that overregulation could stifle industries. But this stance comes as companies are announcing some boundary-pushing uses for AI, including to create composite images of fake people and to conduct background checks. And those uses are raising ethical issues.

So to hear more about this, we've called Drew Harwell. He covers artificial intelligence for The Washington Post. He's with us now. Drew, welcome. Thanks so much for joining us."

Tuesday, November 26, 2019

NYC wants a chief algorithm officer to counter bias, build transparency; Ars Technica, November 25, 2019

Kate Cox, Ars Technica; NYC wants a chief algorithm officer to counter bias, build transparency

"It takes a lot of automation to make the nation's largest city run, but it's easy for that kind of automation to perpetuate existing problems and fall unevenly on the residents it's supposed to serve. So to mitigate the harms and ideally increase the benefits, New York City has created a high-level city government position essentially to manage algorithms."

Thursday, November 21, 2019

Why Business Leaders Need to Understand Their Algorithms; Harvard Business Review, November 19, 2019

Mike Walsh, Harvard Business Review; Why Business Leaders Need to Understand Their Algorithms

"Leaders will be challenged by shareholders, customers, and regulators on what they optimize for. There will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination. Document your decisions carefully and make sure you understand, or at the very least trust, the algorithmic processes at the heart of your business.

Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as “the algorithm made me do it.”"
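
As a sketch of the documentation habit Walsh urges (the fields, filename, and JSON-lines format below are illustrative assumptions, not anything the article specifies), an append-only log recording each automated decision with its model version and training-data provenance would give a later audit something concrete to examine:

```python
import json
import time
import uuid

def log_decision(log_file, model_version, features, prediction, training_data_ref):
    """Append one automated decision to a JSON-lines audit trail."""
    record = {
        "id": str(uuid.uuid4()),                 # unique record id
        "timestamp": time.time(),                # when the decision was made
        "model_version": model_version,          # which model made the call
        "training_data_ref": training_data_ref,  # provenance of the training data
        "features": features,                    # the inputs the model saw
        "prediction": prediction,                # what it decided
    }
    log_file.write(json.dumps(record) + "\n")

# Hypothetical usage for a loan-decision system.
with open("decisions.jsonl", "a") as f:
    log_decision(f, "credit-model-v3",
                 {"income": 52000, "region": "NE"},
                 "approved", "s3://datasets/loans-2019-q3")
```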

Wednesday, November 6, 2019

Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices; The Washington Post, November 6, 2019

Drew Harwell, The Washington Post; Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices

"The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue’s business practices, saying the company’s use of unproven artificial intelligence systems that scan people’s faces and voices constituted a wide-scale threat to American workers."

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
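
A toy example, not from the article, makes that precision concrete: the same set of decisions can pass one common formal definition of fairness (demographic parity) while failing another (equal opportunity), so formalizing fairness forces the choice between them into the open. All names and numbers below are hypothetical.

```python
def positive_rate(decisions, mask):
    """Share of positive decisions among the rows where mask is True."""
    selected = [d for d, m in zip(decisions, mask) if m]
    return sum(selected) / len(selected)

# Hypothetical ground truth (1 = qualified) and model decisions.
truth    = [1, 1, 0, 0, 1, 1, 0, 0]
decision = [1, 1, 0, 0, 1, 0, 1, 0]
group    = ["A", "A", "A", "A", "B", "B", "B", "B"]

in_a = [g == "A" for g in group]
in_b = [g == "B" for g in group]

# Demographic parity: equal positive-decision rates across groups.
print(positive_rate(decision, in_a), positive_rate(decision, in_b))   # 0.5 0.5 -> passes

# Equal opportunity: equal positive rates among the truly qualified.
qualified_a = [g == "A" and t == 1 for g, t in zip(group, truth)]
qualified_b = [g == "B" and t == 1 for g, t in zip(group, truth)]
print(positive_rate(decision, qualified_a),
      positive_rate(decision, qualified_b))                           # 1.0 0.5 -> fails
```

Which of the two definitions should govern is not a technical question, which is precisely the point.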

Elisa Celis and the fight for fairness in artificial intelligence; Yale News, November 6, 2019

Jim Shelton, Yale News; Elisa Celis and the fight for fairness in artificial intelligence

"What can you tell us about the new undergraduate course you’re teaching at Yale?

It’s called “Data Science Ethics.” I came in with an idea of what I wanted to do, but I also wanted to incorporate a lot of feedback from students. The first week was spent asking: “What is normative ethics? How do we even go about thinking in terms of ethical decisions in this context?” With that foundation, we began talking about different areas where ethical questions come out, throughout the entire data science pipeline. Everything from how you collect data to the algorithms themselves and how they end up encoding these biases, and how the results of biased algorithms directly affect people. The goal is to introduce students to all the things they should have in their mind when talking about ethics in the technical sphere.

The class doesn’t require coding or technical background, which allows students from other departments to participate. We have students from anthropology, sociology, economics, and other departments, which broadens the discussion. That’s very valuable when grappling with these inherently interdisciplinary problems."
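
One of the pipeline stages Celis mentions, bias entering at data collection and surviving into the algorithm, can be shown with a deliberately simple sketch; the degenerate "model" and the numbers below are hypothetical.

```python
def train_majority(labels):
    """A deliberately degenerate 'model': predict the majority training label."""
    return 1 if sum(labels) > len(labels) / 2 else 0

# Hypothetical population: most applicants are qualified (label 1).
population = [1] * 600 + [0] * 400
print(train_majority(population))     # 1 -> a representative sample accepts

# A biased collection step under-represents qualified applicants...
biased_sample = [1] * 100 + [0] * 400
print(train_majority(biased_sample))  # 0 -> ...and the trained model rejects everyone
```

The algorithm is identical in both runs; only the collection step changed, yet the outcome flips.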

Monday, October 28, 2019

A.I. Regulation Is Coming Soon. Here’s What the Future May Hold; Fortune, October 24, 2019

David Meyer, Fortune; A.I. Regulation Is Coming Soon. Here’s What the Future May Hold

"Last year Angela Merkel’s government tasked a new Data Ethics Commission with producing recommendations for rules around algorithms and A.I. The group’s report landed Wednesday, packed with ideas for guiding the development of this new technology in a way that protects people from exploitation.

History tells us that German ideas around data tend to make their way onto the international stage...

So, what do those recommendations look like? In a word: tough."

Wednesday, October 23, 2019

A face-scanning algorithm increasingly decides whether you deserve the job; The Washington Post, October 22, 2019

Drew Harwell, The Washington Post; A face-scanning algorithm increasingly decides whether you deserve the job 

HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

"“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York...

Loren Larsen, HireVue’s chief technology officer, said that such criticism is uninformed and that “most AI researchers have a limited understanding” of the psychology behind how workers think and behave...

“People are rejected all the time based on how they look, their shoes, how they tucked in their shirts and how ‘hot’ they are,” he told The Washington Post. “Algorithms eliminate most of that in a way that hasn’t been possible before.”...

HireVue’s growth, however, is running into some regulatory snags. In August, Illinois Gov. J.B. Pritzker (D) signed a first-in-the-nation law that will force employers to tell job applicants how their AI-hiring system works and get their consent before running them through the test. The measure, which HireVue said it supports, will take effect Jan. 1."

Monday, October 14, 2019

Artificial Intelligence Moving to Battlefield as Ethics Weighed; Bloomberg Government, October 10, 2019

Bloomberg Government; Artificial Intelligence Moving to Battlefield as Ethics Weighed

"The Pentagon, taking the next big step of deploying artificial intelligence to aid troops and help select battlefield targets, must settle lingering ethical concerns about using the technology for waging war...

Ethical uses of the technology could include the development of landmines, similar to the Claymore mines used by the U.S. in Vietnam, that can distinguish between adults carrying weapons and children, the nonprofit research group the Mitre Corp. told the defense board at a public hearing at Carnegie Mellon University in March.

Shanahan’s center first employed the technology to help fight wildfires in California and elsewhere and has discussed humanitarian relief uses in the Pacific with Japan and Singapore. Much of the potential for military artificial intelligence lies outside direct battlefield operations in areas such as logistics and accounting."

Wednesday, September 4, 2019

'Sense of urgency', as top tech players seek AI ethical rules; techxplore.com, September 2, 2019

techxplore.com; 'Sense of urgency', as top tech players seek AI ethical rules

"Some two dozen high-ranking representatives of the global and Swiss economies, as well as scientists and academics, met in Geneva for the first Swiss Global Digital Summit aimed at seeking agreement on ethical guidelines to steer ...

Microsoft president Brad Smith insisted on the importance that "technology be guided by values, and that those values be translated into principles and that those principles be pursued by concrete steps."

"We are the first generation of people who have the power to build machines with the capability to make decisions that have in the past only been made by people," he told reporters.

He stressed the need for "transparency" and "accountability ... to ensure that the people who create technology, including at companies like the one I work for, remain accountable to the public at large."

"We need to start taking steps (towards ethical standards) with a sense of urgency," he said."

MIT developed a course to teach tweens about the ethics of AI; Quartz, September 4, 2019

Jenny Anderson, Quartz; MIT developed a course to teach tweens about the ethics of AI

"This summer, Blakeley Payne, a graduate student at MIT, ran a week-long course on ethics in artificial intelligence for 10-14 year olds. In one exercise, she asked the group what they thought YouTube’s recommendation algorithm was used for.

“To get us to see more ads,” one student replied.

“These kids know way more than we give them credit for,” Payne said.

Payne created an open source, middle-school AI ethics curriculum to make kids aware of how AI systems mediate their everyday lives, from YouTube and Amazon’s Alexa to Google search and social media. By starting early, she hopes the kids will become more conscious of how AI is designed and how it can manipulate them. These lessons also help prepare them for the jobs of the future, and potentially become AI designers rather than just consumers."

Thursday, August 29, 2019

New Research Alliance Cements Split on AI Ethics; Inside Higher Ed, August 23, 2019

David Matthews, Inside Higher Ed; New Research Alliance Cements Split on AI Ethics

"Germany, France and Japan have joined forces to fund research into “human-centered” artificial intelligence that aims to respect privacy and transparency, in the latest sign of a global split with the U.S. and China over the ethics of AI."

Monday, April 22, 2019

Tech giants are seeking help on AI ethics. Where they seek it matters; Quartz, March 30, 2019

Dave Gershgorn, Quartz; Tech giants are seeking help on AI ethics. Where they seek it matters

"Meanwhile, as Quartz reported last week, Stanford’s new Institute for Human-Centered Artificial Intelligence excluded from its faculty any significant number of people of color, some of whom have played key roles in creating the field of AI ethics and algorithmic accountability.

Other tech companies are also seeking input on AI ethics, including Amazon, which this week announced a $10 million grant in partnership with the National Science Foundation. The funding will support research into fairness in AI."

Sunday, April 14, 2019

Europe's Quest For Ethics In Artificial Intelligence; Forbes, April 11, 2019

Andrea Renda, Forbes; Europe's Quest For Ethics In Artificial Intelligence

"This week a group of 52 experts appointed by the European Commission published extensive Ethics Guidelines for Artificial Intelligence (AI), which seek to promote the development of “Trustworthy AI” (full disclosure: I am one of the 52 experts). This is an extremely ambitious document. For the first time, ethical principles will not simply be listed, but will be put to the test in a large-scale piloting exercise. The pilot is fully supported by the EC, which endorsed the Guidelines and called on the private sector to start using it, with the hope of making it a global standard.

Europe is not alone in the quest for ethics in AI. Over the past few years, countries like Canada and Japan have published AI strategies that contain ethical principles, and the OECD is adopting a recommendation in this domain. Private initiatives such as the Partnership on AI, which groups more than 80 corporations and civil society organizations, have developed ethical principles. AI developers agreed on the Asilomar Principles and the Institute of Electrical and Electronics Engineers (IEEE) worked hard on an ethics framework. Most high-tech giants already have their own principles, and civil society has worked on documents, including the Toronto Declaration focused on human rights. A study led by Oxford Professor Luciano Floridi found significant alignment between many of the existing declarations, despite varying terminologies. They also share a distinctive feature: they are not binding, and not meant to be enforced."

Thursday, April 4, 2019

The Problem With AI Ethics; The Verge, April 3, 2019

James Vincent, The Verge; The Problem With AI Ethics

Is Big Tech’s embrace of AI ethics boards actually helping anyone?

"Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

“It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right."

Friday, March 29, 2019

'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley; The Guardian, March 29, 2019

Sam Levin, The Guardian; 'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley

"“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”

Universities and ethics boards could play a vital role in counteracting these trends. But they rarely work with people who are affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.”

It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.

The Partnership on AI, an ethics-focused industry group launched by Google, Facebook, Amazon, IBM and Microsoft, does not appear to have black board members or staff listed on its site, and has a board dominated by men. A separate Microsoft research group dedicated to “fairness, accountability, transparency, and ethics in AI” also excludes black voices."