Thursday, March 15, 2018

Can Higher Education Make Silicon Valley More Ethical?; Chronicle of Higher Education, March 14, 2018

Nell Gluckman, Chronicle of Higher Education; Can Higher Education Make Silicon Valley More Ethical?

"Jim Malazita, an assistant professor at Rensselaer Polytechnic Institute, hopes to infuse ethics lessons into core computer-science courses."...

"Q. You mentioned you’ve been getting some pushback.

A. I’ve had to do a lot of social work with computer-science faculty. The faculty were like, This sounds cool, but will they still be able to move on in computer science? We’re using different, messier data sets. Will they still understand the formal aspects of computing?

Q. What do you tell faculty members to convince them that this is a good use of your students’ time?

A. I use a couple of strategies that sometimes work, sometimes don’t. It’s surprisingly important to talk about my own technical expertise. I only moved into social science and humanities as a Ph.D. student. As an undergraduate, my degree was in digital media design. So you can trust me with this content.

It’s helpful to also cast it in terms of helping women and underrepresented-minority retention in computer science. These questions have an impact on all students, but especially women and underrepresented minorities who are used to having their voices marginalized. The faculty want those numbers up."

Thursday, March 8, 2018

Why Software Developers Should Take Ethics Into Consideration; InfoQ, March 8, 2018

Ben Linders, InfoQ; Why Software Developers Should Take Ethics Into Consideration

"Most of the software that influences the behavior of human beings wasn’t created with strong ethical constructs around it. Software developers should ask themselves ethical questions like "who does this affect?", "who could get hurt by this?", and "who does this disadvantage or advantage?", try to answer them, and be comfortable with questions they can’t answer yet.

Theo Schlossnagle, CEO of Circonus, spoke about professional ethics for software developers at QCon London 2018. InfoQ is covering this conference with Q&As, presentations, summaries, and articles.
InfoQ interviewed Schlossnagle about the importance of ethics, what developers can do to incorporate ethical considerations, and asked him what the consequences of unethical software should be.

InfoQ: What makes ethics important for software developers?
Theo Schlossnagle: If you look around, the vast majority of people that are working today in our industry, writing code, making decisions that impact users, haven’t had an intense ethics course in their life. They haven’t taken an ethics course in high school, they haven’t taken an ethics course in college. It doesn’t mean that they don’t know ethics, ethics are pretty innate in human beings. There’s a playbook for discussing ethics; there’s a playbook for contemplating them; there’s not a playbook for answering them. The question is what your mental model is for making yourself answer those questions instead of just avoiding them and pretending they don’t exist.
We have 30 years of software development, and the last 10 to 15 of those have been hyper-accelerated software development. We have software all over the place that influences the behavior of human beings, and we didn’t create that software with strong ethical constructs around it."

The technology industry needs a set of professional ethics; Baltimore Sun, March 8, 2018

"In a wider view, using an ethical framework in scientific enterprise disperses ethical principles throughout society; patients and consumers adopt these ethical standards and come to expect and even extend these standards to other endeavors.
But we have failed to develop an ethical framework when it comes to technology or to understand the impact new media would have on our behavior and societal relationships.
We need to examine the current landscape of ethics within the rapidly expanding technology sector. Just as scientific research has added requirements for classes in ethics in research, the tech sector must develop widespread ethical educational efforts. The lack of firm ethical principles allowed a serious disruption to our 2016 political election and is changing the brains of social media users and rapidly changing the workplace and our economy. What has become commonplace has become acceptable. Robots replace humans in jobs; testing of consumer behavior without consent is unquestioned; acceptability of facial and voice recognition is rarely challenged even though misuse and privacy issues are frightening; and vitriolic, divisive missives are the norm on social media."

Exploring AI ethics and accountability; March 5, 2018

Nirvi Shah; Exploring AI ethics and accountability

"In this special report on the future of artificial intelligence, we explore the technology’s implications. Are people ready to trust their lives to driverless cars? What about an AI doctor? Who’s to blame when price-setting algorithms work together to collude?

We also spoke to Armin Grunwald, an adviser to the German parliament tasked with mapping out the ethical implications of artificial intelligence. Grunwald, it turns out, has an answer to the trolley problem.

This article is part of the special report Confronting the Future of AI."

UT computer science adding ethics courses to curriculum; KXAN, March 5, 2018

Alyssa Goard, KXAN; UT computer science adding ethics courses to curriculum

"Barbary Brunner, CEO of the Austin Technology Council, believes that these ethics courses at UT are “a really valuable thing.” She explained that as companies in the tech world search for new ways to disrupt old ideas, it’s important to look at the human implications of what they’re setting out to do.

“This may be where the university leads the industry and the industry wakes up and says, ‘Wow that’s really smart,'” Brunner said. “For Texas to become a real tech powerhouse — which I think it can become — it needs to engage in the same sort of collaboration between higher education and the technology community that you see in California, that you see in the Seattle area.”

Brunner hasn’t heard many overarching discussions of ethics within the Austin tech world, but knows that individual discussions about ethics are going on at many companies, especially those related to security and artificial intelligence.

In the long run she thinks that ethics training may become one of many qualities tech companies look for in the recent graduates they hire."

Wednesday, March 7, 2018

When it comes to this White House, the fish rots from the head; Washington Post, March 7, 2018

Jennifer Rubin, Washington Post; When it comes to this White House, the fish rots from the head

"“Make no mistake about it, if Trump does not fire Kellyanne Conway after THREE Hatch Act violations another redline will be crossed,” tweeted Norm Eisen, a former White House ethics counsel during the Obama administration. “He will be saying breaking the law does not matter — I will pardon away any sins.” Eisen added: “Well, it does matter, and the American people will not tolerate it.” Richard Painter, who was George W. Bush’s ethics counsel, weighed in as well. “In any other White House, a single major ethics violation would result in dismissal,” he wrote on Twitter. “This is her third, and all three within the same year. She needs to go.” But we surely know she won’t — at least not for this.

The expectation of compliance with the law and concern about the appearance of impropriety are entirely absent from this administration for one very simple reason: Trump has set the standard and the example. Don’t bother with the rules. If caught, just make up stuff."

Top priest shares ‘The Ten Commandments of A.I.’ for ethical computing; internet of business, February 28, 2018

Chris Middleton, internet of business; Top priest shares ‘The Ten Commandments of A.I.’ for ethical computing

"A senior clergyman and government advisor has written what he calls “the Ten Commandments of AI”, to ensure the technology is applied ethically and for social good.

AI has been put forward as the saviour of businesses and national economies, but how to ensure that the technology isn’t abused? The Rt Rev the Lord Bishop of Oxford, a Member of the House of Lords Select Committee on Artificial Intelligence, set out his proposals at a policy debate in London, attended by representatives of government, academia, and the business world.

Speaking on 27 February at a Westminster eForum Keynote Seminar, Artificial Intelligence and Robotics: Innovation, Funding and Policy Priorities, the Bishop set out his ten-point plan, after chairing a debate on trust, ethics, and cybersecurity."

Ethics and AI conference launched by CMU, K&L Gates; Pittsburgh Business Times, March 6, 2018

Pittsburgh Business Times; Ethics and AI conference launched by CMU, K&L Gates

"The inaugural Carnegie Mellon University-K&L Gates Conference on Ethics and Artificial Intelligence is slated for April 9-10.

Leaders from industry, academia and government will explore ethical issues surrounding emerging technologies at the two-day event in Pittsburgh."

ABA Webinar: Thursday, March 8, 2018

Webinar | March 8, 2018 | 1:00 PM - 2:00 PM ET

American Bar Association.


Tuesday, March 6, 2018

Here’s how Canada can be a global leader in ethical AI; The Conversation, February 22, 2018

The Conversation; Here’s how Canada can be a global leader in ethical AI

"Putting Canada in the lead

Canada has a clear choice. Either it embraces the potential of being a leader in responsible AI, or it risks legitimating a race to the bottom where ethics, equity and justice are absent.
Better guidance for researchers on how the Canadian Charter of Rights and Freedoms relates to AI research and development is a good first step. From there, Canada can create a just, equitable and stable foundation for a research agenda that situates the new technology within longstanding social institutions.
Canada also needs a more coordinated, inclusive national effort that prioritizes otherwise marginalized voices. These consultations will be key to positioning Canada as a beacon in this field.
Without these measures, Canada could lag behind. Europe is already drafting important new approaches to data protection. New York City launched a task force this fall to become a global leader on governing automated decision making. We hope this leads to active consultation with city agencies, academics across the sciences and the humanities as well as community groups, from Data for Black Lives to Picture the Homeless, and consideration of algorithmic impact assessments.
These initiatives should provide a helpful context as Canada develops its own governance strategy and works out how to include Indigenous knowledge within that.
If Canada develops a strong national strategy approach to AI governance that works across sectors and disciplines, it can lead at the global level."

The tyranny of algorithms is part of our lives: soon they could rate everything we do; Guardian, March 5, 2018

John Harris, Guardian; The tyranny of algorithms is part of our lives: soon they could rate everything we do

"The tyranny of algorithms is now an inbuilt part of our lives.

These systems are sprawling, often randomly connected, and often beyond logic. But viewed from another angle, they are also the potential constituent parts of comprehensive social credit systems, awaiting the moment at which they will be glued together. That point may yet come, thanks to the ever-expanding reach of the internet. If our phones and debit cards already leave a huge trail of data, the so-called internet of things is now increasing our informational footprint at speed...

Personal data and its endless uses form one of the most fundamental issues of our time, which boils down to the relationship between the individual and power, whether exercised by government or private organisations."


Andrew Whalen, Newsweek; 


"CBS and Paramount are unlikely to see things the same way. While Star Trek: Discovery press releases trumpet the “ideology and hope for the future that inspired a generation of dreamers and doers,” plans for streaming market domination depend upon exclusivity. The metaphor equating artistic expression and property has become so ingrained that companies regularly reduce their consumers to provisional licensees, subject to whatever controls the copyright holder decides upon, even long after the point of purchase.

“Star Trek stands on the shoulders of giants. It exists because they plundered some of the most interesting stories and memes of science fiction, just as all science fiction writers do, to tell their own story. And to argue that when they did it that was the legitimate progress of art and whenever anyone else does it, it's theft, is pretty self-serving and kind of obviously bullshit,” Doctorow said. “It's a ridiculous thing for a law to ban something that ancient and fundamental to how we experience art.”

Countering the monopoly exercised by copyright holders will require a broader social realignment, under which people come to understand art as a shared cultural endowment, rather than product—a mindset beyond capital."

Manhattan teen cartoonist prompts review of Scholastic awards’ copyright rules; amNewYork, March 5, 2018

Nicole Brown, amNewYork; Manhattan teen cartoonist prompts review of Scholastic awards’ copyright rules

"“How come the @Scholastic @artandwriting award requires kids to sign over ‘irrevocable copyright’ if they win?! And why is it hidden in the ‘Terms & Conditions’ link that no one reads? Is it weird that I think that’s wrong?” [Sasha Matthews] wrote in December...

...[T]he ability to display the work could be granted through a license, Harvard law professor Lawrence Lessig said.

“Once you enter into a license to promote the work, you have all the permissions you need,” he told amNewYork. “That’s exactly what they could have done here, but rather than entering a license, they just grabbed the copyright.”

Matthews wrote about the copyright issue for a school assignment and got it published in February on the blog Boing Boing."

The dangers of digital things: Self-driving cars steer proposed laws on robotics and automation; ABA Journal, March 2018

Victor Li, ABA Journal; The dangers of digital things: Self-driving cars steer proposed laws on robotics and automation

"Some states are standing in a legal gray area. Pennsylvania, for example, is a training ground for Uber’s collaboration with Carnegie Mellon to deploy autonomous vehicles throughout Pittsburgh. At press time, Pennsylvania did not have a statute that speaks to the legality of driverless cars.

However, Roger Cohen, policy director at the Commonwealth of Pennsylvania Department of Transportation, says the state has long operated under the assumption that autonomous cars are allowed on public roadways—as long as a human driver is at the steering wheel ready to take over. PennDOT has taken the lead in promulgating policies relating to autonomous vehicles with the goal of their formal adoption into law.

“That policy was deemed to be a more effective tool for the public oversight of testing operations because of its ability to be flexible and nimble and rapid in responding to what are fast-moving, unpredictable, hard-to-anticipate new developments,” Cohen says.

As with Michigan, Cohen says time is of the essence, adding that although Pennsylvania’s regulatory structure has an important purpose, it generally takes one to two years to process feedback and review the rules. “That was deemed to be ineffective for emerging technology,” Cohen says.

Instead, PennDOT has been freed up to develop policies while collaborating with a wealth of stakeholders—including academics, sister agencies, lawyers, technology companies and members of the automotive industry. Cohen says bills are pending in both state legislative houses, and he is optimistic that they’ll be passed.

“When it comes to car accidents, we must drive down the death rate toward zero, which is our goal,” Cohen says. “We have a technology that gives us our best chance to do that. I think there are real issues concerning data ownership, data privacy and cybersecurity. But there’s every reason to be optimistic.”"

Monday, March 5, 2018

Elon Musk quits AI ethics research group; BBC, February 22, 2018

BBC; Elon Musk quits AI ethics research group

"Technology billionaire Elon Musk has quit the board of the research group he co-founded to look into the ethics of artificial intelligence.

In a blog post, OpenAI said the decision had been taken to avoid any conflict of interest as Mr Musk's electric car company, Tesla, became "more focused on AI".

He has been one of AI's most vocal critics, stressing the potential harms."

Sunday, March 4, 2018

Don’t forget how the movement that changed Hollywood started: With great reporting; Washington Post, March 4, 2018

Margaret Sullivan, Washington Post; Don’t forget how the movement that changed Hollywood started: With great reporting

"The world has changed since last year’s Oscars — and for the better.

So let’s not forget what got us there: great journalism.

Legacy media companies may be under constant criticism, and trust in the press may be at a low point.

But less than six months after the New York Times broke its first story about abusive film mogul Harvey Weinstein in early October — quickly followed by more revelations from the New Yorker magazine — American culture has been flipped on its head.

Nothing is the same: Not awards shows, not the corporate workplace, not national politics."

Bendis’ Take on Superman’s Truth, Justice & The American Way; Comic Book Resources, March 3, 2018

Anthony Couto, Comic Book Resources; Bendis’ Take on Superman’s Truth, Justice & The American Way

"Talking all things Superman at his spotlight panel for Emerald City Comic Con, Eisner Award-winning writer Brian Michael Bendis offered a renewed approach to a classic Superman motto: Truth, Justice and the American Way.

Bendis said he’s found new relevance in Superman’s “truth, justice and the American way” adage, which helped inspire him to take on the Man of Steel. “Truth is under siege in our society today,” Bendis continued. “Justice — we see it every day on video, justice is not being handed out to everybody. The American dream, that is also under siege. These things, that seemed cliche just five years ago, are now damn well worth fighting for.”"

Donald Trump Sure Has a Problem with Democracy; New York Times, March 4, 2018

Editorial Board, New York Times; Donald Trump Sure Has a Problem with Democracy

"Though George Washington was elected unanimously, he was always a reluctant president. He pursued a second term in 1792 only at the urging of his cabinet, and in 1796, when he insisted it was time to step down, he famously warned that not to do so risked a return to the very tyranny Americans had fought to overthrow...

Mr. Trump was surely joking about becoming president for life himself. But there can be little doubt now that he truly sees no danger in Mr. Xi’s “great” decision to extend his own rule until death. That craven reaction is in line with Mr. Trump’s consistent support and even admiration for men ruling with increasingly brutal and autocratic methods — Vladimir Putin of Russia, Turkey’s Recep Tayyip Erdogan, Rodrigo Duterte in the Philippines, to name a few."

Ethics is What You Do When No One is Watching You; accountingweb, February 22, 2018

Craig W. Smalley, accountingweb; Ethics is What You Do When No One is Watching You

"The point is that you can basically feel when you are doing something that isn’t right, so quit doing it. If you are unsure, then look it up, see if it is unethical. However, just because you can do something doesn’t mean you should.

I explain ethics to my kids this way, “Ethics is what you do when NO ONE is watching you.” Think about that for a minute."

What price privacy when Apple gets into bed with China?; Guardian, March 4, 2018

John Naughton, Guardian; What price privacy when Apple gets into bed with China?

"Corporations can blather on all they like about corporate responsibility and human rights, but, in the end, maximising shareholder value is all that counts. And Apple is determined to get to that trillion-dollar valuation no matter what. So if you’re an Apple user in China, you now have a simple choice: junk your iPhone, iPad and fancy Macbook laptop; or accept that your autocratic rulers can access your data at their convenience. In which case, whatever you say, say nothing – as they used to say in Belfast."

A 'political hit job'? Why the alt-right is accusing big tech of censorship; Guardian, March 4, 2018

Jason Wilson, Guardian; A 'political hit job'? Why the alt-right is accusing big tech of censorship

"The cases help illustrate big tech’s current dilemma. While politicians and anti-racist campaigners are asking them to act more like publishers, and show some discernment about what they allow to be published, from the right, many are accusing them of censorious liberal bias, and demanding they wind back the few standards they have implemented. These protests have taken many forms, including the launch of alternative, rightwing social media platforms."

Saturday, March 3, 2018

Who needs ethics anyway? – Chips with Everything podcast; Guardian, March 2, 2018

[Podcast] Guardian; Who needs ethics anyway? – Chips with Everything podcast

"Technology companies seem to have a bad reputation at the moment. Whether through honest mistakes or more intentional oversights, the likes of Apple, Facebook, Google and Twitter have created distrust among consumers.

But as technology develops, and as we hand over more control to artificial intelligence and machines, it becomes difficult for developers to foresee the negative consequences or side-effects that might arise.
In October 2017, the AI company DeepMind, a subsidiary of Google, created an ethics group made up of employees and external experts called DeepMind Ethics & Society.
But are these groups any more than a PR strategy? And how can we train technology students to preempt an ethical disaster before they enter the workforce?
To discuss these issues, Jordan Erica Webber is joined by Dr Mariarosaria Taddeo of the Oxford Internet Institute, Prof Laura Norén of NYU and student Kandrea Wade."

Friday, March 2, 2018

4 Philosophy Professors Weigh In on Why The Good Place Is So Forking Funny — and Important; Popsugar, February 28, 2018

Gwendolyn Purdom, Popsugar; 4 Philosophy Professors Weigh In on Why The Good Place Is So Forking Funny — and Important

"There's a scene in the second season of The Good Place where, in order to illustrate the classic moral dilemma known as The Trolley Problem, the characters are forced to live it. The famous thought experiment, which asks different variations of whether you would steer an unstoppable trolley into one person to avoid killing five, has long been a go-to for ethics scholars — but watching the show's hilariously gory take on it brought the lesson to life in a way Agnes Callard, an associate professor of philosophy at the University of Chicago, hadn't considered before. "There's something very violent about the thought experiment itself, like, we're asking them to imagine murdering people," Callard told POPSUGAR. "And the show just takes that really seriously, like, 'OK, let's really imagine it.'"

It's just one of the ways tuning into the NBC sitcom has been a fun first for philosophy and ethics professors like Callard, who aren't used to seeing their area of expertise at the center of a hit network comedy. Callard and the three other philosophy professors/The Good Place fans we talked to said that while pop culture has always reflected on philosophical themes, they don't remember a show or movie ever examining specific theories and works this explicitly."

Philosophers are building ethical algorithms to help control self-driving cars; Quartz, February 28, 2018

Olivia Goldhill, Quartz; Philosophers are building ethical algorithms to help control self-driving cars

"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"

Never have we seen such chaos and corruption; Washington Post, March 1, 2018

Eugene Robinson, Washington Post; Never have we seen such chaos and corruption

"Any other president who displayed such cavalier disregard for previous policy positions and total ignorance of basic facts would have provoked an uproar. Trump barely gets a shrug. Nobody expects him to be consistent. Nobody expects him to know anything about anything. He is defining the presidency down in a way that we must not tolerate.

I spent years as a foreign correspondent in Latin America. To say we are being governed like a banana republic is an insult to banana republics. It’s that bad, and no one should pretend otherwise."

The Trump administration is in an unethical league of its own; Washington Post, March 1, 2018

Max Boot, Washington Post; The Trump administration is in an unethical league of its own

"One of the great non-mysteries of the Trump administration is why Cabinet members think they can behave like aristocrats at the court of the Sun King. The Department of Housing and Urban Development spent $31,000 for a dining set for Secretary Ben Carson’s office while programs for the poor were being slashed. The Environmental Protection Agency has been paying for Administrator Scott Pruitt to fly first class and be protected by a squadron of bodyguards so he doesn’t have to mix with the great unwashed in economy class. The Department of Veterans Affairs spent $122,334 for Secretary David Shulkin and his wife to take what looks like a pleasure trip to Europe last summer; Shulkin’s chief of staff is accused of doctoring emails and lying about what happened. The Department of Health and Human Services paid more than $400,000 for then-Secretary Tom Price to charter private aircraft — a scandal that forced his resignation. 

Why would Cabinet members act any differently when they are serving in the least ethical administration in our history? The “our” is important, because there have been more crooked regimes — but only in banana republics. The corruption and malfeasance of the Trump administration is unprecedented in U.S. history. The only points of comparison are the Gilded Age scandals of the Grant administration, Teapot Dome under the Harding administration, and Watergate and the bribe-taking of Vice President Spiro Agnew during the Nixon administration. But this administration is already in an unethical league of its own. The misconduct revealed during just one day this week — Wednesday — was worse than what presidents normally experience during an entire term...

Given the ethical direction set by this president, it’s a wonder that his Cabinet officers aren’t stealing spoons from their official dining rooms. Come to think of it, maybe someone should look into that."

A code of ethics to get scientists talking; Nature, February 27, 2018

Editorial, Nature; A code of ethics to get scientists talking

"“Pursuing the truth means following the research where it leads, rather than confirming an already formed opinion.”

That statement opens one of seven presentations in a ‘Code of Ethics for Researchers’ produced by a group of scientists convened by the World Economic Forum. These scientists, drawn from many countries, are all under 40 but well established in career terms, with decades of research and leadership ahead of them. This combination makes them well qualified to explore the realities and pressures of modern lab life, so their ideas deserve to be considered by the scientific community...

As the authors state, their purpose is to stimulate open conversations “to safeguard a positive and sound research environment”. Accordingly, Nature readers may do themselves and others some good by visiting and providing feedback. Even better, they might discuss the ideals expressed, and consider how to live up to them in their own lab, research institution or funding agency. We at Nature are trying to do so, too."