Monday, October 8, 2018

Jamal Khashoggi chose to tell the truth. It’s part of the reason he’s beloved.; The Washington Post, October 7, 2018

David Ignatius, The Washington Post; Jamal Khashoggi chose to tell the truth. It’s part of the reason he’s beloved.


[Kip Currier: As I've mentioned to a few people lately--including my book editor, as I finish up a chapter on truth for my ethics textbook--this is a particularly challenging time to tackle the topics of truth, facts, news, and information assessment. The example of Saudi dissident Jamal Khashoggi--"disappeared" and presumed killed--painfully demonstrates both the importance of promoting freedom of expression and truth telling, in the furtherance of human rights, equality, and democratic values, and the potentially deadly stakes for those committed to doing so.]

"George Orwell titled a regular column he wrote for a British newspaper in the mid-1940s “As I Please.” Meaning that he would write exactly what he believed. My Saudi colleague Jamal Khashoggi has always had that same insistent passion for telling the truth about his country, no matter what.

Khashoggi’s fate is unknown as I write, but his colleagues at The Post and friends around the world fear that he was murdered after he visited the Saudi consulate in Istanbul last Tuesday...

Khashogggi [sic] understood that he could keep his mouth shut and stay safe, because he had so many friends in the royal family. But it simply wasn’t in him.

Khashoggi wrote a column for the Post last year in which he described seeing some of his friends arrested and struggling with his conscience. “I said nothing. I didn’t want to lose my job or my freedom. I worried about my family. I have made a different choice now,” he wrote. He had made a decisive break with Mohammed bin Salman, choosing exile and honesty in his writings. His simple four-word explanation: “We Saudis deserve better.”"

From Orwell to ‘Little Mermaid,’ Kuwait Steps Up Book Banning; The New York Times, October 1, 2018

Rod Nordland, The New York Times;
From Orwell to ‘Little Mermaid,’ Kuwait Steps Up Book Banning

"At a bookstore in Kuwait City, the proprietor showed off a secret cupboard full of contraband books behind the cash register and a basement storeroom with even more. “It’s a cliché that book banning helps book sales,” she said. “As a bookseller, I can tell you I would much rather have the books out on display.”

The bookseller did have a banned copy of “Zorba the Greek” on display, discreetly, since it could result in a minimum fine of about $1,650 if Ministry of Information inspectors saw it. She said she was not too worried. “You can always spot them when they come in,” she said. “You can tell they’re not readers.”"

Thursday, October 4, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence; Forbes, October 4, 2018

Nikita Malik, Forbes; We Need To Examine The Ethics And Governance Of Artificial Intelligence

"Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments – based on factors such as age, criminal record, and employment – will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.

Risk assessments – which have existed for a long time – have been used in other areas such as the prevention of terrorism and child sexual exploitation."

The push to create AI-friendly ethics codes is stripping all nuance from morality; Quartz, October 4, 2018

Olivia Goldhill, Quartz; The push to create AI-friendly ethics codes is stripping all nuance from morality

"A paper led by Veljko Dubljević, neuroethics researcher at North Carolina State University, published yesterday (Oct. 2) in PLOS ONE, claims to establish not just the answer to one ethical question, but the entire groundwork for how moral judgements are made.

According to the paper’s “Agent Deed Consequence model,” three things are taken into account when making a moral decision: the person doing the action, the moral action itself, and the consequences of that action. To test this theory, the researchers created moral scenarios that varied details about the agent, the action, and the consequences."

Data Science Institute prepares students for ethical decision-making; The Cavalier Daily (University of Virginia), October 4, 2018

Zoe Ziff, The Cavalier Daily (University of Virginia); Data Science Institute prepares students for ethical decision-making

"The University's Data Science Institute recently incorporated the new Center for Data Ethics and Justice — founded by the University’s Bioethics Chair Jarrett Zigon — in an effort to ramp up its focus on ethics in analysis and interpretation of data. This partnership has created a new course for graduate data science students that specifically addresses ethical issues related to the handling of data and advancement in technology. 

The DSI — located in Dell 1 and Dell 2 — is a research and academic institute that offers master’s programs in data science as well as dual degrees in partnership with the Darden School of Business, the Medical School and the Nursing School.

Phillip Bourne — director of the DSI and professor of biomedical engineering — regards ethics as a pillar of their graduate program. He said few data scientists have formal training in ethics, and the partnership with the Center will equip students with the tools to make ethical decisions throughout their careers. 

The Center brings a redefined course to the Master of Science in Data Science program that is specifically designed for tackling ethical problems in the data science field."

Publishers Escalate Legal Battle Against ResearchGate; Inside Higher Ed, October 4, 2018

Lindsay McKenzie, Inside Higher Ed; Publishers Escalate Legal Battle Against ResearchGate

"The court documents, obtained by Inside Higher Ed from the U.S. District Court in Maryland, include an “illustrative” but “not exhaustive list” of 3,143 research articles the publishers say were shared by ResearchGate in breach of copyright protections. The publishers suggest they could be entitled to up to $150,000 for each infringed work -- a possible total of more than $470 million.

This latest legal challenge is the second that the publishers have filed against ResearchGate in the last year. The first lawsuit, filed in Germany in October 2017, is ongoing. Inside Higher Ed was unable to review court documents for the European lawsuit.

The U.S. lawsuit is the latest development in a long and increasingly complex dispute between some academic publishers and the networking site."

Wednesday, October 3, 2018

Why you need a code of ethics (and how to build one that sticks); CIO, September 17, 2018

Josh Fruhlinger, CIO; Why you need a code of ethics (and how to build one that sticks)

"Importance of a code of ethics

Most of us probably think of ourselves as ethical people. But within organizations built to maximize profits, many seemingly inevitably drift towards more dubious behavior, especially when it comes to user personal data. "More companies than not are collecting data just for the sake of collecting data, without having any reason as to why or what to do with it," says Philip Jones, a GDPR regulatory compliance expert at Capgemini. "Although this is an expensive and unethical approach, most businesses don’t think twice about it. I view this approach as one of the highest risks to companies today, because they have no clue where, how long, or how accurate much of their private data is on consumers."

This is the sort of organizational ethical drift that can arise in the absence of clear ethical guidelines—and it's the sort of drift that laws like the GDPR, the EU's stringent new framework for how companies must handle customer data, are meant to counter."

Saturday, September 29, 2018

How to Identify MBA Programs That Emphasize Ethics; U.S. News & World Report, September 27, 2018

Ilana Kowarski, U.S. News & World Report; How to Identify MBA Programs That Emphasize Ethics

""Students need to ask if ethics is integrated into all classes," Scott MacDonald, the director of the MBA program at the University of Dayton's School of Business Administration, wrote in an email. "Does the curriculum integrate ethics into finance, marketing, operations etc. or does the program just offer a stand-alone ethics class? How ingrained is ethics in the DNA of the school?”
 
Leigh Hafrey, a senior lecturer at the Massachusetts Institute of Technology Sloan School of Management who teaches courses on professional ethics, says discussion-based ethics courses are particularly valuable. "I have attended many very good lectures on ethics, and that certainly qualifies as a legitimate method to convey ethics instruction, but I do think that the opportunity to participate in the conversation makes a huge difference," Hafrey says. "It gives the student ownership … and invites them to take an active stance on issues of major concern to all of us and the organizations in which we work.""

Thursday, September 27, 2018

Why bother teaching drone pilots about ethics? It’s robots that will kill us; The Guardian, September 27, 2018

Andrew Brown, The Guardian; Why bother teaching drone pilots about ethics? It’s robots that will kill us

"The Church of England has just announced a programme to help RAF chaplains offer pastoral care and support to drone pilots. The unusual thing is that they are to study ethics and philosophy in this training."

92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built; Forbes, September 26, 2018

John Koetsier, Forbes; 92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built

""Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in a statement. “Organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm.’ They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.""

Cornell Food Researcher's Downfall Raises Larger Questions For Science; NPR, September 26, 2018

Brett Dahlberg, NPR; Cornell Food Researcher's Downfall Raises Larger Questions For Science

"The fall of a prominent food and marketing researcher may be a cautionary tale for scientists who are tempted to manipulate data and chase headlines.

Brian Wansink, the head of the Food and Brand Lab at Cornell University, announced last week that he would retire from the university at the end of the academic year. Less than 48 hours earlier, JAMA, a journal published by the American Medical Association, had retracted six of Wansink's studies, after Cornell told the journal's editors that Wansink had not kept the original data and the university could not vouch for the validity of his studies."

Wednesday, September 26, 2018

Star Trek: Marina Sirtis Reveals Why She Hasn't Watched 'Discovery'; Comicbook.com, September 22, 2018

Jamie Lovett, Comicbook.com; Star Trek: Marina Sirtis Reveals Why She Hasn't Watched 'Discovery'

"I actually think that Star Trek got it right in our show and in the original show because the shows were about something,” [Star Trek: The Next Generation's Marina Sirtis] said. “They weren’t just entertainment. They were little morality plays and that is what Star Trek lost after we were done. And it ought to go back to that.”

Your DNA Is Not Your Culture; The Atlantic, September 25, 2018

Sarah Zhang, The Atlantic; Your DNA Is Not Your Culture

"DNA, these marketing campaigns imply, reveals something essential about you. And it’s working. Thanks to television-ad blitzes and frequent holiday sales, genetic-ancestry tests have soared in popularity in the past two years. More than 15 million people have now traded their spit for insights into their family history.

If this were simply about wearing kilts or liking Ed Sheeran, these ads could be dismissed as, well, ads. They’re just trying to sell stuff, shrug. But marketing campaigns for genetic-ancestry tests also tap into the idea that DNA is deterministic, that genetic differences are meaningful. They trade in the prestige of genomic science, making DNA out to be far more important in our cultural identities than it is, in order to sell more stuff.

First, the accuracy of these tests is unproven (as detailed here and here). But putting that aside, consider simply what it means to get a surprise result of, say, 15 percent German. If you speak no German, celebrate no German traditions, have never cooked German food, and know no Germans, what connection is there, really? Cultural identity is the sum total of all of these experiences. DNA alone does not supersede it."

Tuesday, September 25, 2018

Open-access journal editors resign after alleged pressure to publish mediocre papers; Science, September 4, 2018

Jop de Vrieze, Science; Open-access journal editors resign after alleged pressure to publish mediocre papers

"The conflict is salient because this week 11 European national funding organizations announced that beginning in 2020, research they fund should only be published in open-access journals, which make articles publicly available, as opposed to traditional journals, which sometimes block access to nonsubscribers. To maintain a level of quality, scientists will be directed to publish only in journals in the Directory of Open Access Journals."

Thursday, September 20, 2018

Batman: Damned’s Digital Release Censors Bruce Wayne’s Naughty Bits; Comic Book Resources, September 19, 2018

Justin Carter, Comic Book Resources; Batman: Damned’s Digital Release Censors Bruce Wayne’s Naughty Bits

[Kip Currier: Tensions related to intellectual freedom, free expression, and censorship in comics and other media formats raise thorny questions about the sometimes nebulous distinctions between content included in analog formats and content omitted from digital ones. Case in point: the new comic, Batman: Damned. These kinds of decisions about free expression and censorship vis-a-vis analog and digital formats have implications for intellectual freedom and the historical record, as well as for diverse domains and activities involving creativity, knowledge, and research.

During a chat today with Mr. Wayne Wise, a graphic novels course instructor as well as a comics historian and creator with Pittsburgh's Oakland-based Phantom of the Attic, Mr. Wise flagged Brian K. Vaughan and Fiona Staples' popular comic book Saga as another example where free expression and censorship have come into conflict. The Comic Book Legal Defense Fund (CBLDF) has a summary of the 2013 Saga controversy here. A thought-provoking quote from that CBLDF case study lays out the larger implications of censorship and self-censorship in the digital age:

Although the removal of Saga #12 was temporary, the circumstances surrounding the case, including Apple’s vague and subjective content policy, lend themselves to a much larger and more frightening issue: To what extent does one need to self-censor in order to make their books available on digital platforms?"]

[Excerpt from Comic Book Resources article]

"CBR has been informed that, while Black Label is an imprint for mature readers, it was decided Bruce Wayne’s nudity was not additive to the story. Thus, the digital version blacked out the scenes. Additionally, CBR has confirmed that future printings of the issue will use the altered panels."

From Arabic to AI: the ancient roots of the algorithm; The Guardian, September 20, 2018

Steven Poole, The Guardian; From Arabic to AI: the ancient roots of the algorithm

"To this day, algorithm is still just a fancy name for a set of rules. If this, then that; if that is true, then do this. In finance, especially, the word is often shortened to “algo”, which via “algae” evokes a sense of inexorable biological growth. But perhaps if we thought of algorithms as mere humdrum flowcharts, drawn up by humans, we’d be better able to see where responsibility really lies if, or when, they go wrong."
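[Kip Currier: Poole's "humdrum flowchart" framing can be made concrete with a toy sketch. The function, thresholds, and decision labels below are entirely invented for illustration; the point is simply that each branch is an explicit, human-authored rule, so responsibility for a bad outcome traces back to whoever wrote the rule.]

```python
# A toy "algorithm" in Poole's sense: a humdrum flowchart of if/then rules.
# All rules and thresholds here are hypothetical, invented for illustration.

def approve_loan(credit_score: int, annual_income: int) -> str:
    """Decide on a loan application using explicit, human-readable rules."""
    if credit_score < 600:            # if this...
        return "deny"                 # ...then that
    if annual_income < 30000:         # if that is true...
        return "refer to a human"     # ...then do this
    return "approve"
```

Because every branch is written out, there is no mystery about where a decision came from, which is exactly why Poole suggests the flowchart framing clarifies where responsibility lies when algorithms go wrong.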

Move Fast And (Don't) Break Things; Forbes, September 20, 2018

Eric Schrock, Forbes; Move Fast And (Don't) Break Things

[Kip Currier: Excellent points made by the author, underscoring the need for organizations of all kinds to provide and promote data ethics education and training within organizational cultures. As RuPaul would say, "Can I get an Amen up in here?!"]

"Integrate Data Ethics Training

The technology landscape is changing rapidly, and few employees are familiar with the ethical implications of new techniques. The applications of computer science are so diverse and varied that there’s no all-encompassing set of standards they can look to. Navigating what’s right and wrong when you’re moving fast and under pressure to meet project deadlines can add a ton of pressure and be a recipe for data breach or misuse.

Companies have a duty to provide their employees with training, and we’re seeing it outside industries, too. At Stanford University, a joint initiative by the students in computer science, Social Good and the Stanford AI Group offers a course on the ethical implications of AI as a way to get future computer scientists and engineers to think about the role of ethics tied to the products they’re creating.

Despite the need to move fast, people need to have downtime to think about the work they’re doing and whether it addresses data privacy and security concerns. We need to stop talking about ethics only when a massive breach happens and instead ensure that they’re ingrained in workflows and across developer communities to help form broader professional standards. Companies should provide their employees with this on-the-job learning. 

In 2014, Facebook updated its motto to the less catchy “move fast with stable infrastructure.”"

Tuesday, September 18, 2018

SAP Becomes First European Tech Company to Create Ethics Advisory Panel for Artificial Intelligence; PR Newswire via Yahoo, September 18, 2018

PR Newswire via Yahoo; SAP Becomes First European Tech Company to Create Ethics Advisory Panel for Artificial Intelligence


""SAP considers the ethical use of data a core value," said Luka Mucic, Chief Financial Officer and Member of the Executive Board, SAP. "We want to create software that enables the intelligent enterprise and actually improves people's lives. Such principles will serve as the basis to make AI a technology that augments human talent."

SAP guiding principles reflect the company's commitment to comply with the highest ethical standards."

'One of the boys': lost narwhal finds new home with band of beluga whales; The Guardian, September 13, 2018

Greg Mercer, The Guardian; 'One of the boys': lost narwhal finds new home with band of beluga whales

[Kip Currier: Check out the fascinating video clip in this article too. Another example of how drones are being utilized for scientific research, such as wildlife monitoring and conservation, and yielding intriguing information and insights.]

"Whale researchers in Quebec’s St Lawrence River are celebrating a remarkable discovery: a juvenile narwhal far from its arctic home, that appears to have been adopted by a band of beluga whales.

The narwhal, more than 1,000km outside its typical range, was filmed by a drone swimming and playing with dozens of belugas that were treating it as one of their own."

Monday, September 17, 2018

Myanmar’s Assault on a Truthful Press; The New York Times, September 16, 2018

Stephen J. Adler, The New York Times; Myanmar’s Assault on a Truthful Press

"Mr. Adler, the president and editor in chief of Reuters, sits on the board of the Committee to Protect Journalists."

"With the world’s nations preparing for the opening this week of the United Nations General Assembly, it is time to affirm not only the facts of this case but the value of facts themselves — to declare our certainty that some things are true and others are not. We must reject the cynical and dangerous idea that everyone is entitled to their own facts. We can see where this has gotten us in Myanmar and elsewhere. And we need to reaffirm the essential role of a free press in uncovering facts.

Journalists, being people, are imperfect. But journalism, done right, serves a high public purpose. It produces transparency in markets, holds governments and businesses to account, gives people tools to make well-informed decisions, uncovers wrongdoing, inspires reforms, and tells true and remarkable stories that move and inspire. The United Nations must insist that the suppression of a free press contradicts the very nature of democracy and cannot be tolerated. And other multinational institutions, alongside governments, should make it forcefully clear to Myanmar’s leaders that Wa Lone and Kyaw Soe Oo must be freed."

Sunday, September 16, 2018

Publishers Call Out Target for 'Censoring' Book Descriptions; Publishers Weekly, September 13, 2018

Claire Kirch, Publishers Weekly; Publishers Call Out Target for 'Censoring' Book Descriptions

"According to Ohio State University Press director Tony Sanfilippo, Target’s move might be a well-meaning policy gone awry. “I understand that they might want to avoid controversy. But if they want to keep Nazis off their site, or Nazi-themed products out of their search results, there are ways of doing that that don’t censor. If you can’t say 'Nazi,' you can’t stop Nazis. And if you can’t search for books about the trans community and trans issues, your search engine and your corporate philosophy are morally flawed.""

Once again, Putin gives us a lesson on the usefulness of the blatant lie; The Washington Post, September 14, 2018

Anne Applebaum, The Washington Post; Once again, Putin gives us a lesson on the usefulness of the blatant lie

"Why did she bother? Or, more accurately: Why was she told to bother? Because the production of blatant lies is useful."

Saturday, September 15, 2018

Russia is better at propaganda than we are; CNN, September 14, 2018

James Ball, CNN; Russia is better at propaganda than we are

"It's a playbook NATO is so aware of that it's produced a handbook setting out the Russian model -- "dismiss, distort, distract, dismay" -- in detail.

It's a playbook that prospers by using the tools of a democracy -- open disagreement, tolerance of fringe groups and crucially mainstream and social media -- against us. And there is so far no sign that its efficacy is diminishing.

Simply put, Russia is better at misinformation than its opponents. It understands better how information -- good or bad -- is spread. Everyone else needs to get better at dealing with it."

Forbidden love: the original Dorian Gray revealed, direct from Oscar Wilde’s pen; The Guardian, September 8, 2018

Donna Ferguson, The Guardian; Forbidden love: the original Dorian Gray revealed, direct from Oscar Wilde’s pen

"It is the first time the original manuscript in Wilde’s own writing has been published and demonstrates how he self-censored some of the most romantic paragraphs. He tones down the more overt references to the homoerotic nature of Basil Hallward’s relationship with Dorian, crossing out his confession that “the world becomes young to me when I hold his hand”.

Yet the manuscript also includes passages – later removed from the novel we know today – that show how Wilde wanted to shock his Victorian readers by openly writing about homosexual feelings. For example, this declaration of love by Basil for Dorian on page 147: “It is quite true that I have worshipped you with far more romance than a man should ever give to a friend. Somehow I have never loved a woman… I quite admit that I adored you madly, extravagantly, absurdly.”"

Scientific publishing is a rip-off. We fund the research – it should be free; The Guardian, September 13, 2018

George Monbiot, The Guardian; Scientific publishing is a rip-off. We fund the research – it should be free

"Never underestimate the power of one determined person. What Carole Cadwalladr has done to Facebook and big data, and Edward Snowden has done to the state security complex, the young Kazakhstani scientist Alexandra Elbakyan has done to the multibillion-dollar industry that traps knowledge behind paywalls. Sci-Hub, her pirate web scraper service, has done more than any government to tackle one of the biggest rip-offs of the modern era: the capture of publicly funded research that should belong to us all. Everyone should be free to learn; knowledge should be disseminated as widely as possible. No one would publicly disagree with these sentiments. Yet governments and universities have allowed the big academic publishers to deny these rights. Academic publishing might sound like an obscure and fusty affair, but it uses one of the most ruthless and profitable business models of any industry."

Please, students, take that ‘impractical’ humanities course. We will all benefit.; The Washington Post, September 14, 2018

Ronald J. Daniels, The Washington Post; Please, students, take that ‘impractical’ humanities course. We will all benefit.

"Ronald J. Daniels is the president of Johns Hopkins University. This op-ed is adapted from a letter to Hopkins students...

I would have also mentioned to the student who shunned the philosophy course that he was misinformed about the job market. It is true that many employers are looking for graduates with specialized technical skills, but they also look for other capabilities. As the world is transformed by artificial intelligence, machine learning and automation, the uniquely human qualities of creativity, imagination, discernment and moral reasoning will be the ultimate coin of the realm. All these skills, as well as the ability to communicate clearly and persuasively, are honed in humanities courses."

How can we better serve LGBTQ journalists?; The Poynter Institute, September 14, 2018

Daniel Funke, The Poynter Institute; How can we better serve LGBTQ journalists?


"Striving to dismantle otherness in order to come up with solutions to journalism’s biggest problems is a constant thread at NLGJA. This year, one of the event’s main panels was made up almost entirely of people of color. There were sessions on how to cover the transgender community, telling stories about bisexuals and diversity and intersection. Some of the issues journalists highlighted include:

  • Deadnaming transgender people in obituaries.
  • A lack of sensitivity from newsroom leaders about stories that could potentially be triggering for reporters of diversity.
  • Missing out on important local stories about the LGBTQ community because of national political coverage.
  • A lack of support for journalists who experience trauma on assignment.
  • Covering stories about transgender people that don’t involve death or hardship."

Thursday, September 13, 2018

NIPS | 2018: Thirty-second Conference on Neural Information Processing Systems; NIPS Code of Conduct

NIPS | 2018: Thirty-second Conference on Neural Information Processing Systems

[Kip Currier: Listening to a Getting Smart podcast, "AI4All Extends The Power of Artificial Intelligence to High School Girls", led me to Neural Information Processing Systems (NIPS) and their NIPS Code of Conduct, which I've copied below] 

"NIPS Code of Conduct

The open exchange of ideas, the freedom of thought and expression, and respectful scientific debate are central to the goals of this conference on machine learning; this requires a community and an environment that recognizes and respects the inherent worth of every person.

Who? All participants---attendees, organizers, reviewers, speakers, sponsors, and volunteers at our conference, workshops, and conference-sponsored social events---are required to agree with this code of conduct both during the event and on official communication channels, including social media. Organizers will enforce this code, and we expect cooperation from all participants to help ensure a safe and productive environment for everybody.

Scope? The conference commits itself to providing an experience for all participants that is free from harassment, bullying, discrimination, and retaliation for all participants. This includes offensive comments related to gender, gender identity and expression, age, sexual orientation, disability, physical appearance, body size, race, ethnicity, religion (or lack thereof), politics, technology choices, or any other personal characteristics. Bullying, intimidation, personal attacks, harassment, sustained disruption of talks or other events, and behavior that interferes with another's full participation will not be tolerated. This includes sexual harassment, stalking, following, harassing photography or recording, inappropriate physical contact, unwelcome sexual attention, public vulgar exchanges, and diminutive characterizations, which are all unwelcome in this community.

Sponsors are equally subject to this Code of Conduct. In particular, sponsors should not use images, activities, or other materials that are of a sexual, racial, or otherwise offensive nature. Booth staff (including volunteers) should not use sexualized clothing/uniforms/costumes, or otherwise create a sexualized environment. This code applies both to official sponsors as well as any organization that uses the conference name as branding as part of its activities at or around the conference.

Outcomes? Participants asked by any member of the community to stop any such behavior are expected to comply immediately. If a participant engages in such behavior, the conference organizers may take any action they deem appropriate, including: a formal or informal warning to the offender, expulsion from the conference with no refund, barring from participation in future conferences or their organization, reporting the incident to the offender’s local institution or funding agencies, or reporting the incident to local law enforcement. A response of "just joking" will not be accepted; behavior can be harassing without an intent to offend. If action is taken, an appeals process will be made available.

Reporting? If you have concerns related to your inclusion at that conference, or observe someone else's difficulties, or have any other concerns related to inclusion, please contact the Diversity and Inclusion co-chairs. The Diversity and Inclusion co-chairs can be reached by email at diversity-chairs@lists.nips.cc, on Twitter at @InclusionInML or by telephone/wechat at a number to be announced shortly; conference volunteers will also have this contact information and can assist with connecting you to the co-chairs. Complaints and violations will be handled at the discretion of the Diversity & Inclusion co-chairs, general chair and the conference board. Reports made during the conference will be responded to in less than 24 hours; those at other times in less than two weeks. We are prepared and eager to help participants contact relevant help services, to escort them to a safe location, or to otherwise assist those experiencing harassment to feel safe for the duration of the conference. We gratefully accept feedback from the community on policy and actions; please contact us."

AI4All Extends The Power of Artificial Intelligence to High School Girls; Getting Smart, March 1, 2018

Getting Smart Staff, Getting Smart; AI4All Extends The Power of Artificial Intelligence to High School Girls

"In 2015, Stanford’s Fei-Fei Li, Olga Russakovsky, and Rick Sommer started a summer camp to address the diversity crisis. The early programs, focused on high school girls, had incredible results including increased technical ability, connections to role models, and a sense of belonging in computer science and AI for participants.

A new nonprofit, AI4ALL, was formed last year to extend access to summer programs like the one launched by Li, Russakovsky, and Sommer--beginning with Stanford, Carnegie Mellon University, Berkeley, Princeton, Boston University, and Simon Fraser.

The mission of AI4ALL, according to Posner, is to increase diversity and inclusion in the field and to make sure the benefits are widely shared by democratizing access to tools and involveing [sic] diverse voices in the field."

Elon Musk's secretive LA private school doesn't just teach spelling and math — it also asks students ethics and critical thinking puzzles you usually don't see elsewhere; Business Insider, September 3, 2018

Andy Kiersz, Business Insider; Elon Musk's secretive LA private school doesn't just teach spelling and math — it also asks students ethics and critical thinking puzzles you usually don't see elsewhere

"Elon Musk's secretive LA private school, Ad Astra, has developed a new tool for teachers and students.

Educational software developer ClassDojo is partnering up with Ad Astra to develop a set of critical thinking puzzles called "Conundrums" that they'll release to teachers and students this fall. The tools are meant to offer the type of critical thinking espoused by Ad Astra, although Ad Astra is not using this specific tool.

The Conundrums pose open-ended critical thinking or ethical problems for the students, who are then encouraged to discuss the issues among themselves and reason out a solution. They tend to pose somewhat more nuanced and complicated questions than most elementary or middle school curricula address."

Let's Talk About AI Ethics; We're On A Deadline; Forbes, September 13, 2018

Tom Vander Ark, Forbes; Let's Talk About AI Ethics; We're On A Deadline

"In Pittsburgh, the Montour School District launched America's First Public School AI Program.

Justin Aglio, director of academic achievement and innovation, is working with MIT on an open source middle school AI Ethics Curriculum that will develop students’ ethical thinking abilities in the domain of artificial intelligence. In addition to learning computer science fundamentals, students will also learn how professions such as designers, social scientists, or philosophers contribute to the ethical design of AI systems.

AI4ALL is creating a national network of university computer science departments connecting with high school students.

Why secondary schools as the hub of community conversations? Every secondary school student should be studying the implications of AI--it’s the most important change force that will shape their careers, social networks, and communities. And what better way to learn than to host conversations that explore what’s going on, what it means, and how to prepare (see a SXSWedu conversation using this framework)?

It’s time to #AskAboutAI. It’s time for secondary schools to become the hub of community conversations about the ethics and opportunities of our time. We’re on a deadline."  

The UK can and must be a world leader in ethical regulation of the digital revolution; ComputerWeekly.com, September 13, 2018

Bryan Glick, ComputerWeekly.com; The UK can and must be a world leader in ethical regulation of the digital revolution

"Nigel Shadbolt, one of the UK’s leading academics in AI and open data, told Computer Weekly that if the UK wants to take a lead in AI, then an area for focus is ethics. Realistically, the UK can’t compete with the multibillions that China is throwing at the sector – but China’s social and political culture is unlikely to take the same approach to regulation and ethics as we would.

It’s an easy thing to say, much harder to do – but the UK has a unique opportunity to lead the world in ethical regulation of the digital revolution. Don’t regulate on specifics – regulate on values and principles that can underpin technology development for years, maybe even decades to come.

The UK government is already setting up a Centre for Data Ethics and Innovation, and Theresa May has called for the UK to be a world leader in ethical AI. We have a genuine opportunity to set the standards that the world will follow. In such uncertain times for the UK tech sector, ethics is one area where we can and must take the lead."

Trump Rated Worse Than Other Modern-Day Presidents on Ethics; Gallup, September 13, 2018

Megan Brenan, Gallup; Trump Rated Worse Than Other Modern-Day Presidents on Ethics

"Bottom Line

The American public's ratings of the ethical standards of Trump and his administration's top officials are generally much worse than their ratings of his predecessors. Trump is viewed as having lower ethical standards than all presidents since Nixon, who resigned when faced with imminent impeachment."

North Carolina, Warned of Rising Seas, Chose to Favor Development; The New York Times, September 12, 2018

John Schwartz and Richard Fausset, The New York Times; North Carolina, Warned of Rising Seas, Chose to Favor Development

[Kip Currier: Food for thought for all stakeholders (--particularly anyone, anywhere, concerned and involved with matters of scientific research, data, modeling, ethics, law, and policy--) as the Carolinas prepare for the arrival of Hurricane Florence.

The article's takeaway insight is in the last three sentences, excerpted and highlighted in bold below.]

"The leading scientific model used to forecast storm surge and its effect on coastal areas, known as Adcirc, was created in large part by Rick Luettich, director of the institute of marine sciences at the University of North Carolina.

In a telephone interview during a break from boarding up the windows of his home in Morehead City, on the coast, Mr. Luettich noted that before 2012, the state pursued progressive policies that put it in the forefront of coastal management. When the legislature pushed back against the clear scientific evidence underlying climate change, he said, “it came as a shock.”

There is a lesson in that, he said.

[Bold and red added for emphasis] “The process of converting scientific research into policy is one that we take for granted at times,” Mr. Luettich said. “What we learned is that you can’t take that for granted. We need to have a closer dialogue with policymakers, to make sure we’re on the same page.”"

Wednesday, September 12, 2018

EU approves controversial Copyright Directive, including internet ‘link tax’ and ‘upload filter’; The Verge, September 12, 2018

James Vincent, The Verge; EU approves controversial Copyright Directive, including internet ‘link tax’ and ‘upload filter’


"The European Parliament has voted in favor of the Copyright Directive, a controversial piece of legislation intended to update online copyright laws for the internet age.

The directive was originally rejected by MEPs in July following criticism of two key provisions: Articles 11 and 13, dubbed the “link tax” and “upload filter” by critics. However, in parliament this morning, an updated version of the directive was approved, along with amended versions of Articles 11 and 13. The final vote was 438 in favor and 226 against.

The fallout from this decision will be far-reaching, and take a long time to settle. The directive itself still faces a final vote in January 2019 (although experts say it’s unlikely it will be rejected). After that it will need to be implemented by individual EU member states, who could very well vary significantly in how they choose to interpret the directive’s text."

The EU copyright law that artists love—and internet pioneers say would destroy the web; Quartz, September 11, 2018

Ephrat Livni, Quartz; The EU copyright law that artists love—and internet pioneers say would destroy the web

"European internet users are up in arms over proposed changes to copyright law that will either make the web more fair and lucrative for content creators or destroy the web as we know it—depending on whom you ask.

The movement to modernize and unify EU intellectual property law, initiated in 2016, is up for a vote in the European Parliament in Brussels Sept. 12.

Two controversial sections—Article 13 and Article 11—would force technology platforms to police digital content by automatically evaluating intellectual property before anything is uploaded and make news aggregators pay to license links to posts. This would ensure that musicians, artists, filmmakers, photographers and media outlets are paid for work that currently drives advertising revenue to technology companies like Google and Facebook for content that they don’t pay for, or so say supporters. Opponents argue that it will transform the web from a free and open platform to a tool to police information and limit ideas."

Tuesday, September 11, 2018

You Discovered Your Genetic History. Is It Worth the Privacy Risk?; Fortune, September 10, 2018

Monica Rodriguez, Fortune; You Discovered Your Genetic History. Is It Worth the Privacy Risk?

"Direct-to-consumer genetic testing companies like 23andMe must win FDA approval to send individuals medical risk findings, while companies that involve physicians in the process do not. But unlike healthcare providers, direct-to-consumer genetic testing companies are not bound by HIPAA, the law that protects the privacy of personal medical information, and there are few laws in place to regulate the privacy of genetic information obtained by these companies.

“One of the big distinctions between medical research and data in Silicon Valley is the ethical framework that requires informed consent,” said Charles Seife, a professor of journalism at New York University who writes extensively on the genetic testing industry. “It is a difference of making sure that [privacy] rights are being preserved.”"

Open Access at the Movies; Inside Higher Ed, September 10, 2018

Lindsay McKenzie, Inside Higher Ed; Open Access at the Movies

"[Jason] Schmitt's film raises some important questions -- how is it possible that big for-profit publishers, such as Elsevier, have fatter profit margins than some of the biggest corporations in the world? Why can't everyone read all publicly funded research for free?

Discussion of these questions in the film is undoubtedly one-sided. Of around 70 people featured in the film, just a handful work for for-profit publishers like Springer-Nature or the American Association for the Advancement of Science -- and they don't get much screen time. There is also no representative from Elsevier, despite the publisher being the focus of much criticism in the film. This was not for lack of trying, said Schmitt. “I offered Elsevier a five-minute section of the film that they could have full creative control over,” he said. “They turned me down.”

Schmitt said he made Paywall not for academics and scholars but for the general public. He wants people to understand how scholarly publishing works, and why they should care that they can’t access research paid for with their tax dollars."

[Documentary] Paywall: The Business of Scholarship, 2018

[Documentary] Paywall: The Business of Scholarship

"Paywall: The Business of Scholarship is a documentary which focuses on the need for open access to research and science, questions the rationale behind the $25.2 billion a year that flows into for-profit academic publishers, examines the 35-40% profit margin associated with the top academic publisher Elsevier and looks at how that profit margin is often greater than some of the most profitable tech companies like Apple, Facebook and Google.  

Staying true to the open access model: it is free to stream and download, for private or public use, and maintains the most open CC BY 4.0 Creative Commons designation to ensure anyone regardless of their social, financial or political background will have access.   

If you are interested in screening this film at your university, please fill out our contact form."

Thursday, September 6, 2018

Why Facebook Will Never Be Free of Fakes; The New York Times, September 5, 2018

Siva Vaidhyanathan, The New York Times; Why Facebook Will Never Be Free of Fakes

"Facebook has put impressive effort into reforming itself around the margins. But considering the harm that Facebook has caused — sharing user data with unauthorized third parties, spreading propaganda that sets off ethnic violence, hosting attacks on elections around the world — exterminating most of the pests is not good enough. Stopping all of them is impossible. Facebook is too big to govern and too big to fix. We might just have to accept that.

Siva Vaidhyanathan is a professor of media studies at the University of Virginia and the author of “Antisocial Media: How Facebook Disconnects Us and Undermines Democracy.”"

I Am Part of the Resistance Inside the Trump Administration; The New York Times, September 5, 2018

Anonymous, The New York Times; I Am Part of the Resistance Inside the Trump Administration

 

[Kip Currier: The New York Times' controversial decision to publish on September 5, 2018 an anonymous Op-Ed essay penned by a "senior official" within the Trump administration triggers a rash of thought-stirring, thorny ethical questions:

Chief among them: What were the considerations in deciding whether to permit the senior official to publish the Op-Ed essay anonymously? (--And did The Times and "Anonymous" discuss revealing the individual's identity at some point in the future?)

Who were the stakeholders that were and were not considered by The Times' editorial staff in making this decision?

What roles are The Times' editorial staff assuming in publishing this piece by Anonymous?

Has The Times communicated sufficient transparency about its editorial decision-making?]

"The Times today is taking the rare step of publishing an anonymous Op-Ed essay. We have done so at the request of the author, a senior official in the Trump administration whose identity is known to us and whose job would be jeopardized by its disclosure. We believe publishing this essay anonymously is the only way to deliver an important perspective to our readers. We invite you to submit a question about the essay or our vetting process here...."

[Excerpt]

"The root of the problem is the president’s amorality. Anyone who works with him knows he is not moored to any discernible first principles that guide his decision making."

From Mountain of CCTV Footage, Pay Dirt: 2 Russians Are Named in Spy Poisoning; The New York Times, September 5, 2018

Ellen Barry, The New York Times; From Mountain of CCTV Footage, Pay Dirt: 2 Russians Are Named in Spy Poisoning


[Kip Currier: Fascinating example of good old-fashioned, "methodical, plodding" detective work, combined with 21st century technologies of mass surveillance and facial recognition by machines and gifted humans.

As I think about the chapters on privacy and surveillance in the ethics textbook I'm writing, this story is a good reminder of the socially positive aspects of new technologies, amid often legitimate concerns about their demonstrated and potential downsides. In the vein of prior stories I've posted on this blog about the use, for example, of drones for animal conservation and monitoring efforts, the identification of the two Russian operatives in the Salisbury, UK poisoning case highlights how the uses and applications of digital age technologies like mass surveillance frequently fall outside the lines of "all bad" or "all good".]

"“It’s almost impossible in this country to hide, almost impossible,” said John Bayliss, who retired from the Government Communications Headquarters, Britain’s electronic intelligence agency, in 2010. “And with the new software they have, you can tell the person by the way they walk, or a ring they wear, or a watch they wear. It becomes even harder.”

The investigation into the Skripal poisoning, known as Operation Wedana, will stand as a high-profile test of an investigative technique Britain has pioneered: accumulating mounds of visual data and sifting through it...

Ceri Hurford-Jones, the managing director of Salisbury’s local radio station, saluted investigators for their “sheer skill in getting a grip on this, and finding out who these people were.”

It may not have been the stuff of action films, but Mr. Hurford-Jones did see something impressive about the whole thing.

“It’s methodical, plodding,” he said. “But, you know, that’s the only way you can do these things. There is a bit of Englishness in it.”"

Wednesday, September 5, 2018

Computer Programmers Get New Tech Ethics Code; The Conversation via Scientific American, August 11, 2018

Cherri M. Pancake, The Conversation via Scientific American; Computer Programmers Get New Tech Ethics Code: The guidelines come from the Association for Computing Machinery

"That’s why the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery, of which I am president, has issued a new code of ethics for computing professionals. And it’s why ACM is taking other steps to help technologists engage with ethical questions...

ACM’s last code of ethics was adopted in 1992, when many people saw computing work as purely technical. The internet was in its infancy and people were just beginning to understand the value of being able to aggregate and distribute information widely. It would still be years before artificial intelligence and machine learning had applications outside research labs.

Today, technologists’ work can affect the lives and livelihoods of people in ways that may be unintended, even unpredictable. I’m not an ethicist by training, but it’s clear to me that anyone in today’s computing field can benefit from guidance on ethical thinking and behavior."

This Music Theory Professor Just Showed How Stupid and Broken Copyright Filters Are; Motherboard, August 30, 2018

Karl Bode, Motherboard; This Music Theory Professor Just Showed How Stupid and Broken Copyright Filters Are

"German music professor Ulrich Kaiser this week wrote about a troubling experiment he ran on YouTube. As a music theory teacher, Kaiser routinely works to catalog a collection of public domain recordings he maintains online in order to teach his students about Beethoven and other classical music composers."