Wednesday, January 30, 2019

Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’; The New York Times, January 29, 2019

Thomas L. Friedman, The New York Times; Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’


[Kip Currier: I just posted this Thomas L. Friedman New York Times piece as a MUST read for the students in my Information Ethics course. The "money quote" regarding the crux of the issue: 

"Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses."

And the call to action for all those who care and can do something, even if it is solely to raise awareness of the promise AND perils of these "deep" technologies:


"This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust."


Friedman leaves out another important "opening"--EDUCATION--and a critically important stakeholder group that is uniquely positioned, and must be ready and able, to help prepare the citizenry to critically evaluate "deep" technologies and information of all kinds--EDUCATORS.]

"Well, even though it’s early, I’m ready to declare the word of the year for 2019.

The word is “deep.”

Why? Because recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before. I’m talking about deep learning, deep insights, deep surveillance, deep facial recognition, deep voice recognition, deep automation and deep artificial minds.

Some of these technologies offer unprecedented promise and some unprecedented peril — but they’re all now part of our lives. Everything is going deep...

Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses...

This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust.

But deep trust and deep loyalty cannot be forged overnight. They take time. That’s one reason this old newspaper I work for — the Gray Lady — is doing so well today. Not all, but many people, are desperate for trusted navigators.

Many will also look for that attribute in our next president, because they sense that deep changes are afoot. It is unsettling, and yet, there’s no swimming back. We are, indeed, far from the shallow now."


Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones; The New York Times, January 29, 2019

Nicole Perlroth, The New York Times; Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones


"A bug this easy to exploit is every company’s worst security nightmare and every spy agency, cybercriminal and stalker’s dream. In emails to Apple’s product security team, Ms. Thompson noted that she and her son were just everyday citizens who believed they had uncovered a flaw that could undermine national security." 

“My fear is that this flaw could be used for nefarious purposes,” she wrote in a letter provided to The New York Times. “Although this certainly raises privacy and security issues for private individuals, there is the potential that this could impact national security if, for example, government members were to fall victim to this eavesdropping flaw."

Tuesday, January 29, 2019

FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated; The Atlantic, January 29, 2019

Ian Bogost, The Atlantic; FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated.

"Trustworthy is hardly a word many people use to characterize big tech these days. Facebook’s careless infrastructure upended democracy. Abuse is so rampant on Twitter and Instagram that those services feel designed to deliver harassment rather than updates from friends. Hacks, leaks, and other breaches of privacy, at companies from Facebook to Equifax, have become so common that it’s hard to believe that any digital information is secure. The tech economy seems designed to steal and resell data."

New definition of privacy needed for the social media age; The San Francisco Chronicle, January 28, 2019

Jordan Cunningham, The San Francisco Chronicle; New definition of privacy needed for the social media age

"To bring about meaningful change, we need to fundamentally overhaul the way we define privacy in the social media age.

We need to stop looking at consumers’ data as a commodity and start seeing it as private information that belongs to individuals. We need to look at the impact of technology on young kids with developing brains. And we need to give consumers an easy way to ensure their privacy in homes filled with connected devices.

That’s why I’ve worked with a group of state lawmakers to create the “Your Data, Your Way” package of legislation."

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Meet the data guardians taking on the tech giants; BBC, January 29, 2019

Matthew Wall, BBC; Meet the data guardians taking on the tech giants

"Ever since the world wide web went public in 1993, we have traded our personal data in return for free services from the tech giants. Now a growing number of start-ups think it's about time we took control of our own data and even started making money from it. But do we care enough to bother?"

Big tech firms still don’t care about your privacy; The Washington Post, January 28, 2019

Rob Pegoraro, The Washington Post; Big tech firms still don’t care about your privacy

"Today is Data Privacy Day. Please clap.

This is an actual holiday of sorts, recognized as such in 2007 by the Council of Europe to mark the anniversary of the 1981 opening of Europe’s Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data — the grandfather of such strict European privacy rules as the General Data Protection Regulation.

In the United States, Data Privacy Day has yet to win more official acknowledgment than a few congressional resolutions. It mainly serves as an opportunity for tech companies to publish blog posts about their commitment to helping customers understand their privacy choices.

But in a parallel universe, today might feature different headlines. Consider the following possibilities."

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

 "I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain; BoingBoing, January 27, 2019

Cory Doctorow, BoingBoing; Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain

"On Friday, hundreds of us gathered at the Internet Archive, at the invitation of Creative Commons, to celebrate the Grand Re-Opening of the Public Domain, just weeks after the first works entered the American public domain in twenty years.

I had the honor of delivering the closing keynote, after a roster of astounding speakers. It was a big challenge and I was pretty nervous, but on reviewing the saved livestream, I'm pretty proud of how it turned out.

Proud enough that I've ripped the audio and posted it to my podcast feed; the video for the keynote is on the Archive and mirrored to Youtube.

The whole event's livestream is also online, and boy do I recommend it."

Monday, January 28, 2019

Ethics as Conversation: A Process for Progress; MIT Sloan Management Review, January 28, 2019

R. Edward Freeman and Bidhan (Bobby) L. Parmar, MIT Sloan Management Review; Ethics as Conversation: A Process for Progress

"We began to use this insight in our conversations with executives and students. We ask them to define what we call “your ethics framework.” Practically, this means defining what set of questions you want to be sure you ask when confronted with a decision or issue that has ethical implications.

The point of asking these questions is partly to anticipate how others might evaluate and interpret your choices and therefore to take those criteria into account as you devise a plan. The questions also help leaders formulate the problem or opportunity in a more nuanced way, which leads to more effective action. You are less likely to be blindsided by negative reactions if you have fully considered a problem.

The exact questions to pose may differ by company, depending on its purpose, its business model, or its more fundamental values. Nonetheless, we suggest seven basic queries that leaders should use to make better decisions on tough issues."

Embedding ethics in computer science curriculum: Harvard initiative seen as a national model; Harvard, John A. Paulson School of Engineering and Applied Sciences, January 28, 2019

Paul Karoff, Harvard, John A. Paulson School of Engineering and Applied Sciences; Embedding ethics in computer science curriculum: Harvard initiative seen as a national model

"Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”
 
Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”"

Sunday, January 27, 2019

Andrew Gillum’s Florida Ethics Troubles Just Got Worse; Slate, January 25, 2019

Mark Joseph Stern, Slate; Andrew Gillum’s Florida Ethics Troubles Just Got Worse

"However Gillum chooses to proceed, it’s clear that Friday’s findings undermine his account and, by extension, his credibility. Throughout the campaign, he insisted that he paid his share of the lavish excursions and never accepted gifts from lobbyists. That narrative is now almost impossible to believe. True, Gillum never performed favors for lobbyists in exchange for their largesse, which would be a federal offense. But even without a quid pro quo, his cozy relationship with lobbyists did not seem to comport with Florida law.

Should Gillum run for office down the road, this blunder will likely be used as a cudgel, risking his ability to win a primary, let alone a general election. Perhaps it is too soon to write off his political career. But if he ever again throws his hat in the ring, his opponents will be ready to pounce with a sordid—and substantiated—tale of corruption."

Five myths about conspiracy theories; The Washington Post, January 17, 2019

Rob Brotherton, The Washington Post; Five myths about conspiracy theories

Rob Brotherton is an academic psychologist at Barnard College and author of “Suspicious Minds: Why We Believe Conspiracy Theories.”

"Conspiracy theories have always been around, but lately, they’ve been getting more attention. As the prevalence of conspiracy thinking among the electorate and even within the highest offices of government has become clear, conspiracism has inspired popular think-pieces and attracted scholars. Along the way, conspiracy theories have also inspired plenty of myths. Here are five."

Can we make artificial intelligence ethical?; The Washington Post, January 23, 2019

Stephen A. Schwarzman, The Washington Post; Can we make artificial intelligence ethical?

"Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, an investment firm...

Too often, we think only about increasing our competitiveness in terms of advancing the technology. But the effort can’t just be about making AI more powerful. It must also be about making sure AI has the right impact. AI’s greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn’t anticipate. AI’s detractors warn of a dystopian nightmare in which AI rapidly replaces human beings at many jobs and tasks. If we want to realize AI’s incredible potential, we must also advance AI in a way that increases the public’s confidence that AI benefits society. We must have a framework for addressing the impacts and the ethics.

What does an ethics-driven approach to AI look like?

It means asking not only whether AI can be used in certain circumstances, but should it?

Companies must take the lead in addressing key ethical questions surrounding AI. This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave and when to disclose the use of AI to consumers, how to address concerns about AI’s effect on privacy and responding to employee fears about AI’s impact on jobs.

As Thomas H. Davenport and Vivek Katyal argue in the MIT Sloan Management Review, we must also recognize that AI often works best with humans instead of by itself."


Friday, January 25, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values; The New Yorker, January 24, 2019

Caroline Lester, The New Yorker; A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

"The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondent’s prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The decisions made by Germany will reverberate beyond its borders. Volkswagen sells more automobiles than any other company in the world. But that manufacturing power comes with a complicated moral responsibility. What should a company do if another country wants its vehicles to reflect different moral calculations? Should a Western car de-prioritize the young in an Eastern country? Shariff leans toward adjusting each model for the country where it’s meant to operate. Car manufacturers, he thinks, “should be sensitive to the cultural differences in the places they’re instituting these ethical decisions.” Otherwise, the algorithms they export might start looking like a form of moral colonialism. But Di Fabio worries about letting autocratic governments tinker with the code. He imagines a future in which China wants the cars to favor people who rank higher in its new social-credit system, which scores citizens based on their civic behavior."

Thursday, January 24, 2019

Drones unleashed against invasive rats in the Galápagos; Nature, January 24, 2019

Emma Marris, Nature; Drones unleashed against invasive rats in the Galápagos

"One advantage of using drones, Morley says, is that it reduces the need to cut trails through a forest to lay poison baits or traps. He is still working on ways to use the drones to monitor whether projects are successful, playing with acoustic, optical or other sensors that the drones could drop near the poison.

Using drones to kill could also change how conservation scientists view such work, Morley says, comparing the approach to modern warfare. “You used to be able to see your opponent. Now, you just press a button and you fire a missile,” he says. “You become a little bit detached from the reality that you have killed something or somebody over there.”

That emotional distance could be seen as a benefit of the technology, or as a problem, says Chelsea Batavia, a scholar of conservation ethics at Oregon State University in Corvallis. She feels that people who kill animals for conservation should allow themselves to feel the moral weight of their actions, and even grieve. “Have a conversation about what you are doing and talk through that as a group,” she advises. “Let the impact of what you are doing hit you.”"

Drone Scare Near New York City Shows Hazard Posed to Air Travel; The New York Times, January 23, 2019

Patrick McGeehan and Cade Metz, The New York Times; Drone Scare Near New York City Shows Hazard Posed to Air Travel

"The disruption was all the more alarming because it came just one month after reported drone sightings caused the shutdown of Gatwick Airport in London, one of the busiest in Europe.

The upheaval at Newark illustrated how vulnerable the air-travel system is to the proliferation of inexpensive drones that can weigh as much as 50 pounds and are capable of flying high and fast enough to get in the path of commercial jets, experts on aviation safety and drone technology said. It also raised questions about whether airports are prepared enough to identify drones and prevent them from paralyzing travel and leaving passengers stranded.

“This is a really disturbing trend,” said John Halinski, former deputy administrator of the federal Transportation Security Administration. “It is a real problem because drones are multiplying every day. They really pose a threat in a number of ways to civil aviation.”"

I Found $90 in the Subway. Is It Yours?; The New York Times, January 24, 2019

Niraj Chokshi, The New York Times; I Found $90 in the Subway. Is It Yours?

"As I got off a train in Manhattan on Wednesday, I paid little attention to a flutter out of the corner of my eye on the subway. Then another passenger told me that I had dropped some money.

“That isn’t mine,” I told her as I glanced at what turned out to be $90 on the ground.

I realized the flutter had been the money falling out of the coat of a man standing near me who had just stepped off the train.

The doors were about to close, and no one was acting, so I grabbed the cash and left the train. But I was too late. The man had disappeared into the crowd. I waited a few minutes to see if he would return, but he was long gone. I tried to find a transit employee or police officer, but none were in sight.

I was running late, so I left. But now what? What are you supposed to do with money that isn’t yours?"

This Time It’s Russia’s Emails Getting Leaked; The Daily Beast, January 24, 2019

Kevin Poulsen, The Daily Beast; This Time It’s Russia’s Emails Getting Leaked

"Russian oligarchs and Kremlin apparatchiks may find the tables turned on them later this week when a new leak site unleashes a compilation of hundreds of thousands of hacked emails and gigabytes of leaked documents. Think of it as WikiLeaks, but without Julian Assange’s aversion for posting Russian secrets.

The site, Distributed Denial of Secrets, was founded last month by transparency activists. Co-founder Emma Best said the Russian leaks, slated for release on Friday, will bring into one place dozens of different archives of hacked material that at best has been difficult to locate, and in some cases appears to have disappeared entirely from the web...

Distributed Denial of Secrets, or DDoS, is a volunteer effort that launched last month. Its objective is to provide researchers and journalists with a central repository where they can find the terabytes of hacked and leaked documents that are appearing on the internet with growing regularity. The site is a kind of academic library or a museum for leak scholars, housing such diverse artifacts as the files North Korea stole from Sony in 2014, and a leak from the Special State Protection Service of Azerbaijan."

Trapped in a hoax: survivors of conspiracy theories speak out; The Guardian, January 24, 2019

The Guardian; Trapped in a hoax: survivors of conspiracy theories speak out

"Conspiracy theories used to be seen as bizarre expressions of harmless eccentrics. Not any more. Gone are the days of outlandish theories about Roswell’s UFOs, the “hoax” moon landings or grassy knolls. Instead, today’s iterations have morphed into political weapons. Turbocharged by social media, they spread with astonishing speed, using death threats as currency.

Together with their first cousins, fake news, they are challenging society’s trust in facts. At its most toxic, this contagion poses a profound threat to democracy by damaging its bedrock: a shared commitment to truth...

Amid this explosive growth, one aspect has been under-appreciated: the human cost. What is the toll paid by those caught up in these falsehoods? And how are they fighting back?"

Tuesday, January 22, 2019

If Mark Zuckerberg Wants to Talk, Britain Is Waiting: Facebook leadership has a history of lashing out instead of opening up; The New York Times, January 22, 2019

Damian Collins, The New York Times; If Mark Zuckerberg Wants to Talk, Britain Is Waiting: Facebook leadership has a history of lashing out instead of opening up

"Mr. Collins is a member of the British Parliament....

So much of our lives is organized through social media, and many people use social media platforms as the main source of information about the world around them. We cannot allow this public space to become a complete wild West, with little or no protection for the citizen user. The rights and responsibilities that we enjoy in the real world need to exist and be protected online as well."

The AI Arms Race Means We Need AI Ethics; Forbes, January 22, 2019

Kasia Borowska, Forbes; The AI Arms Race Means We Need AI Ethics

"In an AI world, the currency is data. Consumers and citizens trade data for convenience and cheaper services. The likes of Facebook, Google, Amazon, Netflix and others process this data to make decisions that influence likes, the adverts we see, purchasing decisions or even who we vote for. There are questions to ask on the implications of everything we access, view or read being controlled by a few global elite. There are also major implications if small companies or emerging markets are unable to compete from being priced out of the data pool. This is why access to AI is so important: not only does it enable more positives from AI to come to the fore, but it also helps to prevent monopolies forming. Despite industry-led efforts, there are no internationally agreed ethical rules to regulate the AI market."

Leading privacy scholar to speak in Dublin; Law Society Gazette Ireland, January 21, 2019

Law Society Gazette Ireland; Leading privacy scholar to speak in Dublin

"“Navigating Privacy in a Data Centric World” is the topic for a talk at Regent House in Trinity College later this month.

On Monday, 28 January at 4pm Jules Polonetsky of the Future of Privacy Forum (FPF) will give a public lecture on how almost every area of technical progress today is reliant on ever broader access to personal information.

Dr Jules Polonetsky is known for his book ‘A Theory of Creepy: Technology, Privacy and Shifting Social Norms’.

He believes that the rapid evolution of digital technologies has thrown up social and ethical dilemmas that we have hardly begun to understand.

Companies, academic researchers, governments and philanthropists utilise ever more sensitive data about individuals’ movements, health, online browsing, home activity and social interactions."

There’s hope for federal online privacy legislation; The Washington Post, January 21, 2019

Editorial Board, The Washington Post; There’s hope for federal online privacy legislation

"A FEW months ago, every day seemed to bring with it a new technology scandal. Now, each day seems to bring a new policy proposal to fix the problem. Even the companies support action from Congress. The gaps between the proposed solutions so far offer some insight into areas of agreement — and more important, disagreement — that will define the fight to come."

Dutch surgeon wins landmark 'right to be forgotten' case; The Guardian, January 21, 2019

Daniel Boffey, The Guardian; Dutch surgeon wins landmark 'right to be forgotten' case

"A Dutch surgeon formally disciplined for her medical negligence has won a legal action to remove Google search results about her case in a landmark “right to be forgotten” ruling...

Google and the Dutch data privacy watchdog, Autoriteit Persoonsgegevens, initially rejected attempts to have the links removed on the basis that the doctor was still on probation and the information remained relevant.

However, in what is said to be the first right to be forgotten case involving medical negligence by a doctor, the district court of Amsterdam subsequently ruled the surgeon had “an interest in not indicating that every time someone enters their full name in Google’s search engine, (almost) immediately the mention of her name appears on the ‘blacklist of doctors’, and this importance adds more weight than the public’s interest in finding this information in this way”...

The European court of justice established the “right to be forgotten” in a 2014 ruling relating to a Spanish citizen’s claim against material about him found on Google searches. It allows European citizens to ask search engines to remove links to “inadequate, irrelevant or … excessive” content. About 3 million people in Europe have since made such a request." 

Monday, January 21, 2019

Trademark Fight Over Vulgar Term’s ‘Phonetic Twin’ Heads to Supreme Court; The New York Times, January 21, 2019

Adam Liptak, The New York Times; Trademark Fight Over Vulgar Term’s ‘Phonetic Twin’ Heads to Supreme Court

"The Supreme Court apparently thinks the question is more complicated, as it agreed this month to hear the government’s appeal. If nothing else, the court can use Mr. Brunetti’s case to sort out just what it meant to say in the 2017 decision, which ruled for an Asian-American dance-rock band called the Slants. (The decision also effectively allowed the Washington Redskins football team to register its trademarks.)

The justices were unanimous in ruling that the prohibition on disparaging trademarks violated the First Amendment. But they managed to split 4 to 4 in most of their reasoning, making it hard to analyze how the decision applies in the context of the ban on scandalous terms."

Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China; The New York Times, January 21, 2019

Austin Ramzy and Sui-Lee Wee, The New York Times; Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China

"Dr. He’s announcement raised ethical concerns about the long-term effects of such genetic alterations, which if successful would be inherited by the child’s progeny, and whether other scientists would be emboldened to try their own gene-editing experiments.

Scientists inside and outside China criticized Dr. He’s work, which highlighted fears that the country has overlooked ethical issues in the pursuit of scientific achievement. The Chinese authorities placed Dr. He under investigation, during which time he has been kept under guard at a guesthouse at the Southern University of Science and Technology in the city of Shenzhen."

Once Centers Of Soviet Propaganda, Moscow's Libraries Are Having A 'Loud' Revival; NPR, January 21, 2019

Lucian Kim, NPR; Once Centers Of Soviet Propaganda, Moscow's Libraries Are Having A 'Loud' Revival

"In recent years, the city's team in charge of libraries has discarded almost all traditional concepts of what a public library is.

"We have a different idea from the way things used to be. A library can be a loud place," says Maria Rogachyova, the official who oversees city libraries. "Of course there should be some quiet nooks where you can focus on your reading, but our libraries also host a huge amount of loud events."...

The library now has its own website, Facebook page and even YouTube channel.

"Moscow libraries aren't competing with modern technology, they're trying to use it," says Rogachyova. "The rise of electronic media shouldn't spell the death of libraries as public spaces.""

The ethics of gene editing: Lulu, Nana, and 'Gattaca'; Johns Hopkins University, January 17, 2019


Saralyn Cruickshank, Johns Hopkins University; The ethics of gene editing: Lulu, Nana, and 'Gattaca'

"Under the direction of Rebecca Wilbanks, a postdoctoral fellow in the Berman Institute of Bioethics and the Department of the History of Medicine, the students have been immersing themselves during in the language and principles of bioethics and applying what they learn to their understanding of technology, with an emphasis on robotics and reproductive technology in particular.

To help them access such heady material, Wilbanks put a spin on the course format. For the Intersession class—titled Science Fiction and the Ethics of Technology: Sex, Robots, and Doing the Right Thing—students explore course materials through the lens of science fiction.

"We sometimes think future technology might challenge our ethical sensibilities, but science fiction is good at exploring how ethics is connected to a certain way of life that happens to include technology," says Wilbanks, who is writing a book on how science fiction influenced the development of emerging forms of synthetic biology. "As our way of life changes together with technology, so might our ethical norms.""

Sunday, January 20, 2019

Facebook Backs University AI Ethics Institute With $7.5 Million; Forbes, January 20, 2019

Sam Shead, Forbes; Facebook Backs University AI Ethics Institute With $7.5 Million

"Facebook is backing an AI ethics institute at the Technical University of Munich with $7.5 million.

The TUM Institute for Ethics in Artificial Intelligence, which was announced on Sunday, will aim to explore fundamental issues affecting the use and impact of AI, Facebook said...

"We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy. Our evidence-based research will address issues that lie at the interface of technology and human values," said TUM Professor Dr. Christoph Lütge, who will lead the institute. 


"Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction." 

Last year, TUM was ranked 6th in the world for AI research by the Times Higher Education magazine behind universities like Carnegie Mellon University in the USA and Nanyang Technological University in Singapore."

Saturday, January 19, 2019

‘It was getting ugly’: Native American drummer speaks on the MAGA-hat wearing teens who surrounded him; The Washington Post, January 19, 2019

Antonio Olivo and Cleve R. Wootson Jr., The Washington Post; ‘It was getting ugly’: Native American drummer speaks on the MAGA-hat wearing teens who surrounded him


"The images in a series of videos that went viral on social media Saturday showed a tense scene near the Lincoln Memorial.

In them, a Native American man steadily beats his drum at the tail end of Friday’s Indigenous Peoples March while singing a song of unity for indigenous people to “be strong” in the face of the ravages of colonialism that now include police brutality, poor access to health care and the ill effects of climate change on reservations.

Surrounding him are a throng of young, mostly white teenage boys, several wearing Make America Great Again caps, with one standing about a foot from the drummer’s face wearing a relentless smirk."

Activists: Chechen Authorities Order Families to Kill LGBT Family Members, Also Pay Ransoms; The Daily Beast, January 18, 2019

The Daily Beast; Activists: Chechen Authorities Order Families to Kill LGBT Family Members, Also Pay Ransoms

"Since 2017, Russian and international LGBT networks have managed to help 150 Chechen victims of violence escape to Western countries. To evacuate one gay person from Chechnya abroad, volunteers have to raise up to 4,000 euros ($4,544)."

Inside Facebook's 'cult-like' workplace, where dissent is discouraged and employees pretend to be happy all the time; CNBC, January 8, 2019

CNBC; Inside Facebook's 'cult-like' workplace, where dissent is discouraged and employees pretend to be happy all the time

"Former employees describe a top-down approach where major decisions are made by the company's leadership, and employees are discouraged from voicing dissent — in direct contradiction to one of Sandberg's mantras, "authentic self."...

"All the things we were preaching, we weren't doing enough of them. We weren't having enough hard conversations. They need to realize that. They need to reflect and ask if they're having hard conversations or just being echo chambers of themselves.""

Why these young tech workers spent their Friday night planning a rebellion against companies like Google, Amazon, and Facebook; Recode, January 18, 2019

Recode; Why these young tech workers spent their Friday night planning a rebellion against companies like Google, Amazon, and Facebook


"“We’re interested in connecting, bringing together, and organizing the workers in tech to help us fight big tech,” Ross Patton tells the crowd. A software engineer for a pharmaceutical technology startup, he’s an active member of the Tech Workers Coalition, a group dedicated to politically mobilizing employees in the industry to reform from within...

A couple of earnest college students head to the front of the room to talk to the speakers who had just presented, asking them for advice on organizing. One of them, a computer science student at Columbia University, says he has ethical concerns about going into the industry and wanted to learn about how to mobilize.

“If I go into an industry where I’m building things that impact people,” he says, “I want to have a say in what I build.""

Thursday, January 17, 2019

Anil Dash on the biases of tech; The Ezra Klein Show via Vox, January 7, 2019

Ezra Klein, The Ezra Klein Show via Vox; Anil Dash on the biases of tech

[Kip Currier: Excellent podcast discussion of ethics and technology issues by journalist Ezra Klein and Anil Dash, CEO of Glitch and host of the tech podcast Function. 

One particularly thought-provoking, illustrative exchange about the choices humans make in designing and embedding certain values in AI algorithms and the implications of those choices (~5:15 mark):


Ezra Klein: "This feels really important to me because something I'm afraid of, as you move into a world of algorithms, is that algorithms hide the choices we make. That the algorithm says you're not viable for this mortgage. The algorithm says that this Donald Trump tweet should be at the top of everybody's feeds. And when it's the algorithm, that detachment from human beings gives it a kind of authority. It's like some gatekeeper saying this is what you should be looking at..."...

Anil Dash: "That's right. The algorithm is a veil for the fact that it's still the people at that company making the choice. And when YouTube chooses to show disturbing content as "related videos" to my 7-year-old son, that is a choice that people at YouTube are making, and people at Google and Alphabet are making. And when they say "well, the algorithm did it," it's like "well, who made the algorithm?" And you can make it not do that. And I know you could do that because, for example, if it were a copyrighted version of a Beyonce song, you'd instantly stop it from being shared. So the algorithm is a set of choices about values and what you want to invest in. And that is, to that point: technology has values; it is not neutral."]

"“Marc Andreessen famously said that ‘software is eating the world,’ but it’s far more accurate to say that the neoliberal values of software tycoons are eating the world,” wrote Anil Dash.

Dash’s argument caught my eye. But then, a lot of Dash’s arguments catch my eye. He’s one of the most perceptive interpreters and critics of the tech industry around these days. That’s in part because Dash is part of the world he’s describing: He’s the CEO of Glitch, the host of the excellent tech podcast Function, and a longtime developer and blogger.

In this conversation, Dash and I discuss his excellent list of the 12 things everyone should know about technology. This episode left me with an idea I didn’t have going in: What if the problem with a lot of the social technologies we use — and, lately, lament — isn’t the ethics of their creators or the revenue models they’re built on, but the sheer scale they’ve achieved? What if products like Facebook and Twitter and Google have just gotten too big and too powerful for anyone to truly understand, much less manage?"

Why 1984 Isn't Banned in China: Censorship in the country is more complicated than many Westerners imagine; The Atlantic, January 13, 2019

Amy Hawkins and Jeffrey Wasserstrom, The Atlantic; Why 1984 Isn't Banned in China: Censorship in the country is more complicated than many Westerners imagine.

"Western commentators often give the impression that Chinese censorship is more comprehensive than it really is, due, in part, to a veritable obsession with the government’s handling of the so-called three T’s of Taiwan, Tibet, and Tiananmen. A 2013 article in The New York Review of Books states, for example, that “to this day Tiananmen is one of the neuralgic words forbidden—not always successfully—on China’s Internet.” Any book, article, or social-media post that so much as mentions these words, the conventional wisdom holds, is liable to disappear.

Even when it comes to the “three T’s,” though, things are a bit less simple than they appear."

Why Won’t John Roberts Accept an Ethics Code for Supreme Court Justices?; Slate, January 16, 2019

Slate; Why Won’t John Roberts Accept an Ethics Code for Supreme Court Justices?

"Supreme Court justices also face ethics questions. Is it permissible for justices to provide anonymous leaks to the press about their private conferences? May they criticize political candidates, speak at meetings of partisan legal organizations, or raise funds for charities? May they vacation with litigants in the middle of a pending case or comment on legal issues or proceedings in lower courts? May clerks and court staff be assigned to work on the justices’ private books and memoirs? These are not hypotheticals. At least one justice has engaged in each of these activities in past years, and there is no definitive code of conduct that prohibits them."

Tuesday, January 15, 2019

Princeton collaboration brings new insights to the ethics of artificial intelligence; Princeton University, January 14, 2019

Molly Sharlach, Office of Engineering Communications, Princeton University; Princeton collaboration brings new insights to the ethics of artificial intelligence

"Should machines decide who gets a heart transplant? Or how long a person will stay in prison?

The growing use of artificial intelligence in both everyday life and life-altering decisions brings up complex questions of fairness, privacy and accountability. Surrendering human authority to machines raises concerns for many people. At the same time, AI technologies have the potential to help society move beyond human biases and make better use of limited resources.

“Princeton Dialogues on AI and Ethics” is an interdisciplinary research project that addresses these issues, bringing engineers and policymakers into conversation with ethicists, philosophers and other scholars. At the project’s first workshop in fall 2017, watching these experts get together and share ideas was “like nothing I’d seen before,” said Ed Felten, director of Princeton’s Center for Information Technology Policy (CITP). “There was a vision for what this collaboration could be that really locked into place.”

The project is a joint venture of CITP and the University Center for Human Values, which serves as “a forum that convenes scholars across the University to address questions of ethics and value” in diverse settings, said director Melissa Lane, the Class of 1943 Professor of Politics. Efforts have included a public conference, held in March 2018, as well as more specialized workshops beginning in 2017 that have convened experts to develop case studies, consider questions related to criminal justice, and draw lessons from the study of bioethics.

“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies,” said Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs. He emphasized that the careful implementation of AI systems can be an opportunity “to achieve better outcomes with less bias and less risk. It’s important not to see this as an entirely negative situation.”"

Top ethics and compliance failures of 2018; Compliance Week, December 17, 2018

Jaclyn Jaeger, Compliance Week; Top ethics and compliance failures of 2018 


"Shady data privacy practices, allegations of financial misconduct, and widespread money-laundering schemes make up Compliance Week’s list of the top five ethics and compliance failures of 2018. All impart some key compliance lessons."

Data Sheet—How the Tech Industry Needs to Evolve to Care More About People; Fortune, January 14, 2019

Aaron Pressman and Adam Lashinsky, Fortune; Data Sheet—How the Tech Industry Needs to Evolve to Care More About People

"Good morning from Redmond, Wash., where I’m spending the day soaking up some wisdom at Microsoft.

In preparation for my day I perused this “top 10 tech issues for 2019” post that Microsoft President Brad Smith wrote on LinkedIn, which Microsoft owns. I somehow expected this list to focus on the top commercial aspects of tech in the coming year. But that’s not what Smith, Microsoft’s top lawyer and policy executive who has written recently on the need for regulations around facial recognition, means by “issues.”

Instead, Smith is focused on the interplay between big technology companies and society. Topics like privacy, ethical artificial intelligence, protectionism, “disinformation,” and the human impacts of technology top his list.

The technology industry has been branded over the years as not caring all that much about people. Even the industry’s leading humanist, Steve Jobs, ultimately judged the success of his wares by whether they delighted customers, not if they were good for society. The industry is evolving.

I’ll share what I learn tomorrow."

Monday, January 14, 2019

Leading With Ethics; Forbes, January 7, 2019

Janine Schindler, Forbes; Leading With Ethics


"In today’s high-visibility world with the constant social media avalanche, it’s more important than ever to ensure that, as a leader, your ethical message is consistent. Anyone out there can talk the talk, but if you don’t truly believe in the importance of ethical behavior in your business career, it will become apparent to your employees, your peers and to the people occupying the C-suite.

If you’re searching for the answer to the ongoing dilemma of how to nurture an environment of trust, accountability and respect in the workplace, start with practicing ethical leadership in all levels of management.

To be an ethical leader, you must demonstrate ethical behavior — not just when others are looking, but all the time and over time. Consistently doing what's right, even when it's difficult, should be an integral part of a leader’s makeup. If you behave in an ethical manner when you’re in the spotlight, but avoid responsibility, cut corners and value profit above people behind closed doors, it is inevitable you’ll be found out."

Saturday, January 12, 2019

Trump’s bizarre statement on China dishonors us all; The Washington Post, January 11, 2019

Dana Milbank, The Washington Post; Trump’s bizarre statement on China dishonors us all

"Asked an unrelated question on the White House South Lawn on Thursday, Trump volunteered a comparison between Speaker Nancy Pelosi (D-Calif.) and Senate Minority Leader Charles E. Schumer (D-N.Y.) — and the leaders of the People’s Republic of China.

“I find China, frankly, in many ways, to be far more honorable than Cryin’ Chuck and Nancy. I really do,” he said. “I think that China is actually much easier to deal with than the opposition party.”

China, honorable?

China, which is holding a million members of religious minorities in concentration camps for “reeducation” by force?

China, which, according to Trump’s own FBI director, is, by far, the leading perpetrator of technology theft and espionage against the United States and is “using illegal methods” to “replace the U.S. as the world’s leading superpower”?

China, whose state-sponsored hackers were indicted just three weeks ago and accused of a 12-year campaign of cyberattacks on this and other countries?

China, whose ruling Communist Party has caused the extermination of tens of millions of people since the end of World War II, through government-induced famine, the ideological purges of the Cultural Revolution, and in mowing down reformers in Tiananmen Square?

Trump has a strange sense of honor. In April, he bestowed the same adjective on the world’s most oppressive leader, North Korea’s nuclear-armed dictator: “Kim Jong Un, he really has been very open and I think very honorable from everything we’re seeing.”

Now, the president is declaring that China’s dictatorship, by far the world’s biggest international criminal and abuser of human rights and operator of its most extensive police state, is more honorable than his political opponents in the United States.

In Trump’s view, your opponents are your enemies — and your actual enemies are your friends. How can you negotiate with a man who thinks like this?"

University Data Science Programs Turn to Ethics and the Humanities; EdSurge, January 11, 2019

Sydney Johnson, EdSurge; University Data Science Programs Turn to Ethics and the Humanities 


"These days a growing number of people are concerned with bringing more talk of ethics into technology. One question is whether that will bring change to data-science curricula...

“You don't just throw algorithms at data. You need to look at it, understand how it was collected, and ask yourself: ‘How can I be responsible with the data and the people from which it came?’” says Cathryn Carson, a UC Berkeley historian with a background in physics who steered the committee tasked with designing the school’s data-science curriculum.

The new division goes a step further than adding an ethics course to an existing program. “Computer science has been trying to catch up with the ethical implications of what they are already doing,” Carson says. “Data science has this built in from the start, and you’re not trying to retrofit something to insert ethics—it's making it a part of the design principle.”"