
Thursday, March 7, 2019

Graduate students explore the ethics of artificial intelligence; Princeton University, February 28, 2019

Denise Valenti for the Office of Communications, Princeton University; Graduate students explore the ethics of artificial intelligence

"As artificial intelligence advances, the questions surrounding its use have become increasingly complex. To introduce students to the challenges the technology could present and to prepare them to engage in and lead conversations about its ethical use, the Graduate School this year is offering a Professional Learning Development Cohort titled “Ethics of AI.”

This cohort offering is part of the Graduate School’s larger commitment to equip students with skills they can apply across a full range of professional settings in which they may make important contributions after leaving Princeton.

Nineteen graduate students from various disciplines — including psychology, politics, mechanical and aerospace engineering, and quantitative and computational biology — are participating in the five-part learning series. Through presentations, case studies, readings and discussions, they are developing an awareness of the issues at stake and considering their application in real-world situations.

“A recurring theme I hear from leaders in the technology industry is that there is a growing need for people who can engage rigorously with fundamental ethical issues surrounding technological advances,” said Sarah-Jane Leslie, dean of the Graduate School. “A great many of Princeton’s graduate students are exceptionally well-placed to contribute precisely that robust ethical thinking, so we wanted to provide a forum for our students to deepen their knowledge of these issues.”"

Tuesday, January 29, 2019

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."
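The author's point, that the ethics ends up in the hands of whoever writes the software, can be made concrete. Below is a minimal, hypothetical sketch (all names and numbers are invented for illustration; no real self-driving stack works this way) of how a trolley-style choice becomes an explicit, human-chosen parameter in code:

```python
# Hypothetical sketch: a trolley-style choice does not stay abstract;
# someone has to write it down as an explicit rule. All names here are
# invented for illustration, not taken from any real vehicle's software.

from dataclasses import dataclass

@dataclass
class Outcome:
    occupants_at_risk: int    # people inside the vehicle
    pedestrians_at_risk: int  # people outside the vehicle

def choose_maneuver(outcomes: dict[str, Outcome]) -> str:
    """Pick the maneuver that minimizes a weighted harm score.

    PEDESTRIAN_WEIGHT is the ethical judgment: a human engineer chose
    this number, which is exactly the author's point.
    """
    PEDESTRIAN_WEIGHT = 1.0  # weighting everyone equally is itself a moral choice

    def harm(o: Outcome) -> float:
        return o.occupants_at_risk + PEDESTRIAN_WEIGHT * o.pedestrians_at_risk

    return min(outcomes, key=lambda name: harm(outcomes[name]))

# Example: swerving endangers one occupant; braking endangers two pedestrians.
print(choose_maneuver({
    "swerve": Outcome(occupants_at_risk=1, pedestrians_at_risk=0),
    "brake":  Outcome(occupants_at_risk=0, pedestrians_at_risk=2),
}))  # -> "swerve"
```

Whoever sets PEDESTRIAN_WEIGHT is answering the Trolley Problem; the machine merely executes that answer.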

Sunday, January 27, 2019

Can we make artificial intelligence ethical?; The Washington Post, January 23, 2019

Stephen A. Schwarzman, The Washington Post; Can we make artificial intelligence ethical?

"Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, an investment firm...

Too often, we think only about increasing our competitiveness in terms of advancing the technology. But the effort can’t just be about making AI more powerful. It must also be about making sure AI has the right impact. AI’s greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn’t anticipate. AI’s detractors warn of a dystopian nightmare in which AI rapidly replaces human beings at many jobs and tasks. If we want to realize AI’s incredible potential, we must also advance AI in a way that increases the public’s confidence that AI benefits society. We must have a framework for addressing the impacts and the ethics.

What does an ethics-driven approach to AI look like?

It means asking not only whether AI can be used in certain circumstances, but whether it should be.

Companies must take the lead in addressing key ethical questions surrounding AI. This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave; when to disclose the use of AI to consumers; how to address concerns about AI’s effect on privacy; and how to respond to employee fears about AI’s impact on jobs.

As Thomas H. Davenport and Vivek Katyal argue in the MIT Sloan Management Review, we must also recognize that AI often works best with humans instead of by itself."
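For readers wondering what "biases in AI algorithms" looks like as an engineering question rather than a slogan, one common (and contested) starting point is to compare a model's outcomes across groups. A minimal sketch with invented data follows; the function names and numbers are illustrative only:

```python
# A minimal illustration (invented data, hypothetical model outputs) of one
# concrete form the "bias in AI algorithms" question takes in practice:
# checking whether a model's positive-outcome rate differs across groups.

def positive_rate(predictions: list[int], groups: list[str], target: str) -> float:
    """Share of positive predictions (1s) among members of the target group."""
    hits = [p for p, g in zip(predictions, groups) if g == target]
    return sum(hits) / len(hits)

# Toy predictions from some model, paired with a sensitive attribute.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = positive_rate(preds, groups, "a")  # 0.75
rate_b = positive_rate(preds, groups, "b")  # 0.25

# The gap between the rates is the "demographic parity difference",
# one common (and contested) fairness metric.
print(f"parity gap: {abs(rate_a - rate_b):.2f}")  # -> parity gap: 0.50
```

Which metric to test (demographic parity here, equalized odds elsewhere) is itself a value judgment, which is precisely the kind of question the op-ed says companies must take the lead on.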


Friday, June 15, 2018

The Guardian view on the ethics of AI: it’s about Dr Frankenstein, not his monster; The Guardian, June 12, 2018

Editorial, The Guardian; The Guardian view on the ethics of AI: it’s about Dr Frankenstein, not his monster

"But in all these cases, the companies involved – which means the people who work for them – will be actively involved in maintaining, tweaking and improving the work. This opens an opportunity for consistent ethical pressure and for the attribution of responsibility to human beings and not to inanimate objects. Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster, but the good Dr Frankenstein we need to worry about most."

Monday, March 5, 2018

Elon Musk quits AI ethics research group; BBC, February 22, 2018

BBC; Elon Musk quits AI ethics research group

"Technology billionaire Elon Musk has quit the board of the research group he co-founded to look into the ethics of artificial intelligence.

In a blog post, OpenAI said the decision had been taken to avoid any conflict of interest as Mr Musk's electric car company, Tesla, became "more focused on AI".

He has been one of AI's most vocal critics, stressing the potential harms."