Thursday, January 31, 2019

State looking for a few ethical people; The Garden Island: Kauai's newspaper since 1901, January 31, 2019

Editorial, The Garden Island: Kauai's newspaper since 1901; State looking for a few ethical people

"Not so easy to say what’s ethical these days, as it seems to depend on one’s standards. Either way, if you’re one of those ethical people, the Hawaii State Ethics Commission wants to hear from you.

The Judicial Council is seeking applicants to fill one upcoming vacancy on the Hawaii State Ethics Commission. The term will run from July 1, 2019 through June 30, 2023...

Some of our brightest minds have tackled the significance of ethics. Here is what a few of them had to say:

• “History shows that where ethics and economics come in conflict, victory is always with economics. Vested interests have never been known to have willingly divested themselves unless there was sufficient force to compel them.” — B. R. Ambedkar

• “The first step in the evolution of ethics is a sense of solidarity with other human beings.” — Albert Schweitzer

• “Non-violence leads to the highest ethics, which is the goal of all evolution. Until we stop harming all other living beings, we are still savages.” — Thomas A. Edison

• “Ethics are more important than laws.” — Wynton Marsalis"

Facebook has been paying teens $20 a month for access to all of their personal data; Vox, January 30, 2019

Kaitlyn Tiffany, Vox; Facebook has been paying teens $20 a month for access to all of their personal data

"The shocking “research” program has restarted a long-standing feud between Facebook and Apple.

Facebook, now entering a second year of huge data-collection scandals, can’t really afford this particular news story. However, it’s possible the company just weighed the risks of public outrage against the benefits of the data and made a deliberate choice: Knowing which apps people are using, how they’re using them, and for how long is extremely useful information for Facebook."

Wednesday, January 30, 2019

Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’; The New York Times, January 29, 2019

Thomas L. Friedman, The New York Times; Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’


[Kip Currier: I just posted this Thomas L. Friedman New York Times piece as a MUST read for the students in my Information Ethics course. The "money quote" regarding the crux of the issue: 

"Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses."

And the call to action for all those who care and can do something, even if it is solely to raise awareness of the promise AND perils of these "deep" technologies:


"This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust."


Friedman leaves out another important "opening"--EDUCATION--and a critically important stakeholder group that is uniquely positioned, and must be ready and able, to help prepare the citizenry to critically evaluate "deep" technologies and information of all kinds--EDUCATORS.]
 

"Well, even though it’s early, I’m ready to declare the word of the year for 2019.

The word is “deep.”

Why? Because recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before. I’m talking about deep learning, deep insights, deep surveillance, deep facial recognition, deep voice recognition, deep automation and deep artificial minds.

Some of these technologies offer unprecedented promise and some unprecedented peril — but they’re all now part of our lives. Everything is going deep...

Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses...

This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust.

But deep trust and deep loyalty cannot be forged overnight. They take time. That’s one reason this old newspaper I work for — the Gray Lady — is doing so well today. Not all, but many people, are desperate for trusted navigators.

Many will also look for that attribute in our next president, because they sense that deep changes are afoot. It is unsettling, and yet, there’s no swimming back. We are, indeed, far from the shallow now."


Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones; The New York Times, January 29, 2019

Nicole Perlroth, The New York Times; Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones


"A bug this easy to exploit is every company’s worst security nightmare and every spy agency, cybercriminal and stalker’s dream. In emails to Apple’s product security team, Ms. Thompson noted that she and her son were just everyday citizens who believed they had uncovered a flaw that could undermine national security." 

“My fear is that this flaw could be used for nefarious purposes,” she wrote in a letter provided to The New York Times. “Although this certainly raises privacy and security issues for private individuals, there is the potential that this could impact national security if, for example, government members were to fall victim to this eavesdropping flaw."

Tuesday, January 29, 2019

FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated; The Atlantic, January 29, 2019

Ian Bogost, The Atlantic;

FaceTime Is Eroding Trust in Tech

Privacy paranoiacs have been totally vindicated.

"Trustworthy is hardly a word many people use to characterize big tech these days. Facebook’s careless infrastructure upended democracy. Abuse is so rampant on Twitter and Instagram that those services feel designed to deliver harassment rather than updates from friends. Hacks, leaks, and other breaches of privacy, at companies from Facebook to Equifax, have become so common that it’s hard to believe that any digital information is secure. The tech economy seems designed to steal and resell data."

New definition of privacy needed for the social media age; The San Francisco Chronicle, January 28, 2019

Jordan Cunningham, The San Francisco Chronicle; New definition of privacy needed for the social media age

"To bring about meaningful change, we need to fundamentally overhaul the way we define privacy in the social media age.

We need to stop looking at consumers’ data as a commodity and start seeing it as private information that belongs to individuals. We need to look at the impact of technology on young kids with developing brains. And we need to give consumers an easy way to ensure their privacy in homes filled with connected devices.

That’s why I’ve worked with a group of state lawmakers to create the “Your Data, Your Way” package of legislation."

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Meet the data guardians taking on the tech giants; BBC, January 29, 2019

Matthew Wall, BBC; Meet the data guardians taking on the tech giants

"Ever since the world wide web went public in 1993, we have traded our personal data in return for free services from the tech giants. Now a growing number of start-ups think it's about time we took control of our own data and even started making money from it. But do we care enough to bother?"

Big tech firms still don’t care about your privacy; The Washington Post, January 28, 2019

Rob Pegoraro, The Washington Post; Big tech firms still don’t care about your privacy

"Today is Data Privacy Day. Please clap.

This is an actual holiday of sorts, recognized as such in 2007 by the Council of Europe to mark the anniversary of the 1981 opening of Europe’s Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data — the grandfather of such strict European privacy rules as the General Data Protection Regulation.

In the United States, Data Privacy Day has yet to win more official acknowledgment than a few congressional resolutions. It mainly serves as an opportunity for tech companies to publish blog posts about their commitment to helping customers understand their privacy choices.

But in a parallel universe, today might feature different headlines. Consider the following possibilities."

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

 "I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain; BoingBoing, January 27, 2019

Cory Doctorow, BoingBoing; Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain

"On Friday, hundreds of us gathered at the Internet Archive, at the invitation of Creative Commons, to celebrate the Grand Re-Opening of the Public Domain, just weeks after the first works entered the American public domain in twenty years.
 

I had the honor of delivering the closing keynote, after a roster of astounding speakers. It was a big challenge and I was pretty nervous, but on reviewing the saved livestream, I'm pretty proud of how it turned out.

Proud enough that I've ripped the audio and posted it to my podcast feed; the video for the keynote is on the Archive and mirrored to Youtube.

The whole event's livestream is also online, and boy do I recommend it."

Monday, January 28, 2019

Ethics as Conversation: A Process for Progress; MIT Sloan Management Review, January 28, 2019

R. Edward Freeman and Bidhan (Bobby) L. Parmar, MIT Sloan Management Review; Ethics as Conversation: A Process for Progress

"We began to use this insight in our conversations with executives and students. We ask them to define what we call “your ethics framework.” Practically, this means defining what set of questions you want to be sure you ask when confronted with a decision or issue that has ethical implications.

The point of asking these questions is partly to anticipate how others might evaluate and interpret your choices and therefore to take those criteria into account as you devise a plan. The questions also help leaders formulate the problem or opportunity in a more nuanced way, which leads to more effective action. You are less likely to be blindsided by negative reactions if you have fully considered a problem.

The exact questions to pose may differ by company, depending on its purpose, its business model, or its more fundamental values. Nonetheless, we suggest seven basic queries that leaders should use to make better decisions on tough issues."

Embedding ethics in computer science curriculum: Harvard initiative seen as a national model; Harvard, John A. Paulson School of Engineering and Applied Sciences, January 28, 2019

Paul Karoff, Harvard, John A. Paulson School of Engineering and Applied Sciences; Embedding ethics in computer science curriculum: Harvard initiative seen as a national model

"Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”
 
Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”"

Sunday, January 27, 2019

Andrew Gillum’s Florida Ethics Troubles Just Got Worse; Slate, January 25, 2019

Mark Joseph Stern, Slate; Andrew Gillum’s Florida Ethics Troubles Just Got Worse

"However Gillum chooses to proceed, it’s clear that Friday’s findings undermine his account and, by extension, his credibility. Throughout the campaign, he insisted that he paid his share of the lavish excursions and never accepted gifts from lobbyists. That narrative is now almost impossible to believe. True, Gillum never performed favors for lobbyists in exchange for their largesse, which would be a federal offense. But even without a quid pro quo, his cozy relationship with lobbyists did not seem to comport with Florida law.

Should Gillum run for office down the road, this blunder will likely be used as a cudgel, risking his ability to win a primary, let alone a general election. Perhaps it is too soon to write off his political career. But if he ever again throws his hat in the ring, his opponents will be ready to pounce with a sordid—and substantiated—tale of corruption."

Five myths about conspiracy theories; The Washington Post, January 17, 2019

Rob Brotherton, The Washington Post; Five myths about conspiracy theories

Rob Brotherton is an academic psychologist at Barnard College and author of “Suspicious Minds: Why We Believe Conspiracy Theories.”

"Conspiracy theories have always been around, but lately, they’ve been getting more attention. As the prevalence of conspiracy thinking among the electorate and even within the highest offices of government has become clear, conspiracism has inspired popular think-pieces and attracted scholars. Along the way, conspiracy theories have also inspired plenty of myths. Here are five."

Can we make artificial intelligence ethical?; The Washington Post, January 23, 2019

Stephen A. Schwarzman, The Washington Post; Can we make artificial intelligence ethical?

"Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, an investment firm...

Too often, we think only about increasing our competitiveness in terms of advancing the technology. But the effort can’t just be about making AI more powerful. It must also be about making sure AI has the right impact. AI’s greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn’t anticipate. AI’s detractors warn of a dystopian nightmare in which AI rapidly replaces human beings at many jobs and tasks. If we want to realize AI’s incredible potential, we must also advance AI in a way that increases the public’s confidence that AI benefits society. We must have a framework for addressing the impacts and the ethics.

What does an ethics-driven approach to AI look like?

It means asking not only whether AI can be used in certain circumstances, but whether it should be.

Companies must take the lead in addressing key ethical questions surrounding AI. This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave, when to disclose the use of AI to consumers, how to address concerns about AI’s effect on privacy, and how to respond to employee fears about AI’s impact on jobs.

As Thomas H. Davenport and Vivek Katyal argue in the MIT Sloan Management Review, we must also recognize that AI often works best with humans instead of by itself."

 

Friday, January 25, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values; The New Yorker, January 24, 2019

Caroline Lester, The New Yorker; A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

"The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The decisions made by Germany will reverberate beyond its borders. Volkswagen sells more automobiles than any other company in the world. But that manufacturing power comes with a complicated moral responsibility. What should a company do if another country wants its vehicles to reflect different moral calculations? Should a Western car de-prioritize the young in an Eastern country? Shariff leans toward adjusting each model for the country where it’s meant to operate. Car manufacturers, he thinks, “should be sensitive to the cultural differences in the places they’re instituting these ethical decisions.” Otherwise, the algorithms they export might start looking like a form of moral colonialism. But Di Fabio worries about letting autocratic governments tinker with the code. He imagines a future in which China wants the cars to favor people who rank higher in its new social-credit system, which scores citizens based on their civic behavior."

Thursday, January 24, 2019

Drones unleashed against invasive rats in the Galápagos; Nature, January 24, 2019

Emma Marris, Nature; Drones unleashed against invasive rats in the Galápagos

"One advantage of using drones, Morley says, is that it reduces the need to cut trails through a forest to lay poison baits or traps. He is still working on ways to use the drones to monitor whether projects are successful, playing with acoustic, optical or other sensors that the drones could drop near the poison.

Using drones to kill could also change how conservation scientists view such work, Morley says, comparing the approach to modern warfare. “You used to be able to see your opponent. Now, you just press a button and you fire a missile,” he says. “You become a little bit detached from the reality that you have killed something or somebody over there.”

That emotional distance could be seen as a benefit of the technology, or as a problem, says Chelsea Batavia, a scholar of conservation ethics at Oregon State University in Corvallis. She feels that people who kill animals for conservation should allow themselves to feel the moral weight of their actions, and even grieve. “Have a conversation about what you are doing and talk through that as a group,” she advises. “Let the impact of what you are doing hit you.”"

Drone Scare Near New York City Shows Hazard Posed to Air Travel; The New York Times, January 23, 2019

Patrick McGeehan and Cade Metz, The New York Times; Drone Scare Near New York City Shows Hazard Posed to Air Travel

"The disruption was all the more alarming because it came just one month after reported drone sightings caused the shutdown of Gatwick Airport in London, one of the busiest in Europe.

The upheaval at Newark illustrated how vulnerable the air-travel system is to the proliferation of inexpensive drones that can weigh as much as 50 pounds and are capable of flying high and fast enough to get in the path of commercial jets, experts on aviation safety and drone technology said. It also raised questions about whether airports are prepared enough to identify drones and prevent them from paralyzing travel and leaving passengers stranded.

“This is a really disturbing trend,” said John Halinski, former deputy administrator of the federal Transportation Security Administration. “It is a real problem because drones are multiplying every day. They really pose a threat in a number of ways to civil aviation.”"

I Found $90 in the Subway. Is It Yours?; The New York Times, January 24, 2019

Niraj Chokshi, The New York Times; I Found $90 in the Subway. Is It Yours?

"As I got off a train in Manhattan on Wednesday, I paid little attention to a flutter out of the corner of my eye on the subway. Then another passenger told me that I had dropped some money.

“That isn’t mine,” I told her as I glanced at what turned out to be $90 on the ground.

I realized the flutter had been the money falling out of the coat of a man standing near me who had just stepped off the train.

The doors were about to close, and no one was acting, so I grabbed the cash and left the train. But I was too late. The man had disappeared into the crowd. I waited a few minutes to see if he would return, but he was long gone. I tried to find a transit employee or police officer, but none were in sight.

I was running late, so I left. But now what? What are you supposed to do with money that isn’t yours?"

This Time It’s Russia’s Emails Getting Leaked; The Daily Beast, January 24, 2019

Kevin Poulsen, The Daily Beast; This Time It’s Russia’s Emails Getting Leaked

"Russian oligarchs and Kremlin apparatchiks may find the tables turned on them later this week when a new leak site unleashes a compilation of hundreds of thousands of hacked emails and gigabytes of leaked documents. Think of it as WikiLeaks, but without Julian Assange’s aversion for posting Russian secrets.

The site, Distributed Denial of Secrets, was founded last month by transparency activists. Co-founder Emma Best said the Russian leaks, slated for release on Friday, will bring into one place dozens of different archives of hacked material that at best has been difficult to locate, and in some cases appears to have disappeared entirely from the web...

Distributed Denial of Secrets, or DDoS, is a volunteer effort that launched last month. Its objective is to provide researchers and journalists with a central repository where they can find the terabytes of hacked and leaked documents that are appearing on the internet with growing regularity. The site is a kind of academic library or a museum for leak scholars, housing such diverse artifacts as the files North Korea stole from Sony in 2014, and a leak from the Special State Protection Service of Azerbaijan."

Trapped in a hoax: survivors of conspiracy theories speak out; The Guardian, January 24, 2019

The Guardian; Trapped in a hoax: survivors of conspiracy theories speak out

"Conspiracy theories used to be seen as bizarre expressions of harmless eccentrics. Not any more. Gone are the days of outlandish theories about Roswell’s UFOs, the “hoax” moon landings or grassy knolls. Instead, today’s iterations have morphed into political weapons. Turbocharged by social media, they spread with astonishing speed, using death threats as currency.

Together with their first cousins, fake news, they are challenging society’s trust in facts. At its most toxic, this contagion poses a profound threat to democracy by damaging its bedrock: a shared commitment to truth...

Amid this explosive growth, one aspect has been under-appreciated: the human cost. What is the toll paid by those caught up in these falsehoods? And how are they fighting back?"

Tuesday, January 22, 2019

If Mark Zuckerberg Wants to Talk, Britain Is Waiting: Facebook leadership has a history of lashing out instead of opening up; The New York Times, January 22, 2019

Damian Collins, The New York Times; If Mark Zuckerberg Wants to Talk, Britain Is Waiting

"Mr. Collins is a member of the British Parliament....

So much of our lives is organized through social media, and many people use social media platforms as the main source of information about the world around them. We cannot allow this public space to become a complete wild West, with little or no protection for the citizen user. The rights and responsibilities that we enjoy in the real world need to exist and be protected online as well."

The AI Arms Race Means We Need AI Ethics; Forbes, January 22, 2019

Kasia Borowska, Forbes; The AI Arms Race Means We Need AI Ethics

"In an AI world, the currency is data. Consumers and citizens trade data for convenience and cheaper services. The likes of Facebook, Google, Amazon, Netflix and others process this data to make decisions that influence likes, the adverts we see, purchasing decisions or even who we vote for. There are questions to ask on the implications of everything we access, view or read being controlled by a few global elite. There are also major implications if small companies or emerging markets are unable to compete from being priced out of the data pool. This is why access to AI is so important: not only does it enable more positives from AI to come to the fore, but it also helps to prevent monopolies forming. Despite industry-led efforts, there are no internationally agreed ethical rules to regulate the AI market."

Leading privacy scholar to speak in Dublin; Law Society Gazette Ireland, January 21, 2019

Law Society Gazette Ireland; Leading privacy scholar to speak in Dublin

"“Navigating Privacy in a Data Centric World” is the topic for a talk at Regent House in Trinity College later this month.

On Monday, 28 January at 4pm Jules Polonetsky of the Future of Privacy Forum (FPF) will give a public lecture on how almost every area of technical progress today is reliant on ever broader access to personal information.

Dr Jules Polonetsky is known for his book ‘A Theory of Creepy: Technology, Privacy and Shifting Social Norms’.

He believes that the rapid evolution of digital technologies has thrown up social and ethical dilemmas that we have hardly begun to understand.

Companies, academic researchers, governments and philanthropists utilise ever more sensitive data about individuals’ movements, health, online browsing, home activity and social interactions."

There’s hope for federal online privacy legislation; The Washington Post, January 21, 2019

Editorial Board, The Washington Post; There’s hope for federal online privacy legislation

"A FEW months ago, every day seemed to bring with it a new technology scandal. Now, each day seems to bring a new policy proposal to fix the problem. Even the companies support action from Congress. The gaps between the proposed solutions so far offer some insight into areas of agreement — and more important, disagreement — that will define the fight to come."

Dutch surgeon wins landmark 'right to be forgotten' case; The Guardian, January 21, 2019

Daniel Boffey, The Guardian; Dutch surgeon wins landmark 'right to be forgotten' case

"A Dutch surgeon formally disciplined for her medical negligence has won a legal action to remove Google search results about her case in a landmark “right to be forgotten” ruling...

Google and the Dutch data privacy watchdog, Autoriteit Persoonsgegevens, initially rejected attempts to have the links removed on the basis that the doctor was still on probation and the information remained relevant.

However, in what is said to be the first right to be forgotten case involving medical negligence by a doctor, the district court of Amsterdam subsequently ruled the surgeon had “an interest in not indicating that every time someone enters their full name in Google’s search engine, (almost) immediately the mention of her name appears on the ‘blacklist of doctors’, and this importance adds more weight than the public’s interest in finding this information in this way”...

The European court of justice established the “right to be forgotten” in a 2014 ruling relating to a Spanish citizen’s claim against material about him found on Google searches. It allows European citizens to ask search engines to remove links to “inadequate, irrelevant or … excessive” content. About 3 million people in Europe have since made such a request." 

Monday, January 21, 2019

Trademark Fight Over Vulgar Term’s ‘Phonetic Twin’ Heads to Supreme Court; The New York Times, January 21, 2019

Adam Liptak, The New York Times; Trademark Fight Over Vulgar Term’s ‘Phonetic Twin’ Heads to Supreme Court

"The Supreme Court apparently thinks the question is more complicated, as it agreed this month to hear the government’s appeal. If nothing else, the court can use Mr. Brunetti’s case to sort out just what it meant to say in the 2017 decision, which ruled for an Asian-American dance-rock band called the Slants. (The decision also effectively allowed the Washington Redskins football team to register its trademarks.)

The justices were unanimous in ruling that the prohibition on disparaging trademarks violated the First Amendment. But they managed to split 4 to 4 in most of their reasoning, making it hard to analyze how the decision applies in the context of the ban on scandalous terms."

Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China; The New York Times, January 21, 2019

Austin Ramzy and Sui-Lee Wee, The New York Times; Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China

"Dr. He’s announcement raised ethical concerns about the long-term effects of such genetic alterations, which if successful would be inherited by the child’s progeny, and whether other scientists would be emboldened to try their own gene-editing experiments.

Scientists inside and outside China criticized Dr. He’s work, which highlighted fears that the country has overlooked ethical issues in the pursuit of scientific achievement. The Chinese authorities placed Dr. He under investigation, during which time he has been kept under guard at a guesthouse at the Southern University of Science and Technology in the city of Shenzhen."

Once Centers Of Soviet Propaganda, Moscow's Libraries Are Having A 'Loud' Revival; NPR, January 21, 2019

Lucian Kim, NPR; Once Centers Of Soviet Propaganda, Moscow's Libraries Are Having A 'Loud' Revival

"In recent years, the city's team in charge of libraries has discarded almost all traditional concepts of what a public library is.

"We have a different idea from the way things used to be. A library can be a loud place," says Maria Rogachyova, the official who oversees city libraries. "Of course there should be some quiet nooks where you can focus on your reading, but our libraries also host a huge amount of loud events."...

The library now has its own website, Facebook page and even YouTube channel.

"Moscow libraries aren't competing with modern technology, they're trying to use it," says Rogachyova. "The rise of electronic media shouldn't spell the death of libraries as public spaces.""

The ethics of gene editing: Lulu, Nana, and 'Gattaca'; Johns Hopkins University, January 17, 2019


Saralyn Cruickshank, Johns Hopkins University; The ethics of gene editing: Lulu, Nana, and 'Gattaca'

"Under the direction of Rebecca Wilbanks, a postdoctoral fellow in the Berman Institute of Bioethics and the Department of the History of Medicine, the students have been immersing themselves in the language and principles of bioethics and applying what they learn to their understanding of technology, with an emphasis on robotics and reproductive technology in particular.

To help them access such heady material, Wilbanks put a spin on the course format. For the Intersession class—titled Science Fiction and the Ethics of Technology: Sex, Robots, and Doing the Right Thing—students explore course materials through the lens of science fiction.

"We sometimes think future technology might challenge our ethical sensibilities, but science fiction is good at exploring how ethics is connected to a certain way of life that happens to include technology," says Wilbanks, who is writing a book on how science fiction influenced the development of emerging forms of synthetic biology. "As our way of life changes together with technology, so might our ethical norms.""