Wednesday, March 13, 2019

Bribes to Get Into Yale and Stanford? What Else Is New?; The New York Times, March 12, 2019

Frank Bruni, The New York Times;

Bribes to Get Into Yale and Stanford? What Else Is New?

A new college admissions scandal is just the latest proof of a grossly uneven playing field.

"While colleges pledge fairer admissions and more diverse student bodies, they don’t patrol what’s going on with nearly enough earnestness and energy to honor that promise. They’re ripe to be gamed because the admissions process is a game."

Malcolm Gladwell: Plagiarism Is Just ‘Bad Manners’; The Daily Beast, March 11, 2019

Marlow Stern, The Daily Beast; Malcolm Gladwell: Plagiarism Is Just ‘Bad Manners’

"You don’t think plagiarism is a journalistic sin? 

I mean, it’s bad manners. Who cares. Someone can use all the words of mine they want. What I get angry at is when I have an idea that I think is original and consequential, and someone steals it and doesn’t credit me, that makes me mad. But if you want to go through my books and you find a paragraph, and you think that paragraph describes something really well and want to stick it in your book, go ahead! 

So if I wanted to release Blink under my name you wouldn’t sue my ass into oblivion? I’m kidding obviously. 

Well, not the whole book! But the book is an idea. There’s a story in Blink about a Marine Corps general, and if you think it’s a great paragraph and you want to take it, I hope you credit me. But if you don’t credit me, am I going to knock on your door and ask for you to be fired from your job? No! Life goes on, man. People have to have some sense of judgment about these things. These are not crimes; they are misdemeanors. If I saw you jaywalking, would I ask for you to be fired from your job? 

You don’t think a journalist should be fired for plagiarism? 

It’s bad manners. I don’t think the person who plagiarized me should have lost her job. I don’t care."

College cheating scandal is the tip of the iceberg; CNN, March 12, 2019

David Perry, CNN; College cheating scandal is the tip of the iceberg

"We're not talking about donating a building, we're talking about fraud," said Andrew Lelling, the US Attorney for Massachusetts, as he announced indictments in a massive scheme alleging that celebrities and other wealthy individuals used cheating, bribes, and lies to get their kids into elite colleges.

The behavior described in this alleged fraud should be punished. But on a broader and more basic level, the case also sheds light on deep inequities in our college admissions system. Because if someone can get their kid into Harvard by buying a building, let alone by committing any of the alleged acts emerging from this case, the scandal isn't just what's illegal, but what's legal as well."

Thursday, March 7, 2019

Scientists Raise Concerns About Revisions to Human Research Regulations; The Scientist, February 19, 2019

Katarina Zimmer, The Scientist; Scientists Raise Concerns About Revisions to Human Research Regulations

"When Henrietta Lacks visited the Johns Hopkins Medical Center in the 1950s to be treated for cervical cancer, she had no idea that some of her cancer cells would be used to create one of the most scientifically valuable and financially profitable cell lines that is used in labs today. Nor was she asked for permission.

Lacks’s experience has become nationally acknowledged as a shameful episode in the history of biomedical research in the US—particularly after the publication of a popular book about Lacks and her family—and forced the scientific community to consider how to conduct ethical research with human samples. The case was one of the reasons for a heated debate during a recent, six-year-long process of revising the Common Rule, a package of regulations adopted in the 1990s intended to ensure that all federally funded research conducted on human subjects is done ethically.

The revisions, enacted last month, are an attempt to strike a better balance between patients’ need for privacy and the benefits of using their tissue for research. In a paper published January 31 in JAMA Oncology, a group of clinicians and ethicists from the University of Michigan and the University of Pennsylvania argue that the revisions could have unintended consequences for research with various types of biospecimens, and propose that regulators should consider them differently when creating research protections."

Does AI Ethics Have A Bad Name?; Forbes, March 7, 2019

Calum Chace, Forbes; Does AI Ethics Have A Bad Name?

"One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent...

The issues explored in the field of AI ethics are important but it would help to clarify them if some of the heat was taken out of the discussion.  It might help if instead of talking about AI ethics, we talked about beneficial AI and AI safety.  When an engineer designs a bridge she does not finish the design and then consider how to stop it from falling down.  The ability to remain standing in all foreseeable circumstances is part of the design criteria, not a separate discipline called “bridge ethics”. Likewise, if an AI system has deleterious effects it is simply a badly designed AI system.

Interestingly, this change has already happened in the field of AGI research, the study of whether and how to create artificial general intelligence, and how to avoid the potential downsides of that development, if and when it does happen.  Here, researchers talk about AI safety. Why not make the same move in the field of shorter-term AI challenges?"

A university gallery showed art with Confederate imagery. Then students called to remove it.; The Washington Post, February 26, 2019

Mark Lynn Ferguson, The Washington Post; A university gallery showed art with Confederate imagery. Then students called to remove it.

"Joy Garnett, program associate for the National Coalition Against Censorship, said the school had other options than taking the art down. It could have provided more context around the exhibit, such as temporary dividers to conceal the art and signs cautioning visitors on the difficult subject matter. After the exhibit closed, Baldwin did hold listening sessions, but only students and faculty were allowed to attend, according to school spokeswoman Liesel Crosier. The sessions, argued Jonathan Friedman, project director for campus free speech at PEN America, a nonprofit devoted to defending freedom of speech, “would have likely been much richer if the exhibit were able to continue.”

Garnett also found fault with the artists, who she said need to understand the communities where they are showing their work. More than half of Baldwin’s residential students are not white. “It’s not about avoiding offending people,” Garnett said. “It’s about how do you couch the offense in a way that’s productive.”

Graduate students explore the ethics of artificial intelligence; Princeton University, February 28, 2019

Denise Valenti for the Office of Communications, Princeton University; Graduate students explore the ethics of artificial intelligence

"As artificial intelligence advances, the questions surrounding its use have become increasingly complex. To introduce students to the challenges the technology could present and to prepare them to engage in and lead conversations about its ethical use, the Graduate School this year is offering a Professional Learning Development Cohort titled “Ethics of AI.”

This cohort offering is part of the Graduate School’s larger commitment to equip students with skills they can apply across a full range of professional settings in which they may make important contributions after leaving Princeton.

Nineteen graduate students from various disciplines — including psychology, politics, mechanical and aerospace engineering, and quantitative and computational biology — are participating in the five-part learning series. Through presentations, case studies, readings and discussions, they are developing an awareness of the issues at stake and considering their application in real-world situations.

“A recurring theme I hear from leaders in the technology industry is that there is a growing need for people who can engage rigorously with fundamental ethical issues surrounding technological advances,” said Sarah-Jane Leslie, dean of the Graduate School. “A great many of Princeton’s graduate students are exceptionally well-placed to contribute precisely that robust ethical thinking, so we wanted to provide a forum for our students to deepen their knowledge of these issues.”"

Wednesday, March 6, 2019

Open data needs ethical, efficient management; University of Cape Town News, March 6, 2019

Helen Swingler, University of Cape Town News; Open data needs ethical, efficient management

"Ethics in data management

Niklas Zimmer, manager of digital services at UCT Libraries, said that ethical management of data is key. Several of the lightning presentations made at the event underscored this.
UCT Gender Health and Justice Research Unit (GHJRU) research officer Kristen Daskilewicz cited an important example when she said the use of open data is not always appropriate for research where there are heightened safety concerns.
Her example described a collaborative two-year cross-sectional research project on LGBTI health, safety and other rights that the unit undertook on behalf of the Southern and Eastern Africa Research Collective on Health (SEARCH). SEARCH is a collective of 23 civil society organisations in nine countries.
The project participants had to be “very careful” with data collection and dissemination in the study countries, particularly those where aspects of same-sex relationships have been criminalised. There were concerns about protecting the survey participants and the unit’s civil society partners, who were the data collectors."

UC open access fight exposes publishing rip-off: Charging exorbitant fees for journal articles isn’t in the best interests of scientific research, Mercury News, March 6, 2019

Editorial: UC open access fight exposes publishing rip-off

Charging exorbitant fees for journal articles isn’t in the best interests of scientific research


"The scholarly research publishing industry is a rip-off that hinders scientific advances and piles unnecessary costs onto taxpayers who already fund much of the academic work.

It’s ridiculous that, in this age of the internet, researchers are paying huge fees for access to academic papers and for publication of their own work. That made sense in the days when scholarly works were printed in bound volumes. Today, academic work, especially public- and foundation-funded research, should be open for all. It shouldn’t cost $35 to $40 for each article, effectively freezing out those without the means to pay...

The University of California’s mission statement reads: “The distinctive mission of the university is to serve society as a center of higher learning, providing long-term societal benefits through transmitting advanced knowledge, discovering new knowledge, and functioning as an active working repository of organized knowledge.”
UC’s commitment to open access helps fulfill that goal and advances scientific enterprise for the benefit of all."

Making a path to ethical, socially-beneficial artificial intelligence, MIT News, March 5, 2019

School of Humanities, Arts, and Social Sciences, MIT News; Making a path to ethical, socially-beneficial artificial intelligence

Leaders from government, philanthropy, academia, and industry say collaboration is key to make sure artificial intelligence serves the public good.

"Many speakers at the three-day celebration, which was held on Feb. 26-28, called for an approach to education, research, and tool-making that combines collective knowledge from the technology, humanities, arts, and social science fields, throwing the double-edged promise of the new machine age into stark relief...

The final panel was “Computing for the People: Ethics and AI,” moderated by New York Times columnist Thomas Friedman. In a conversation afterward, Nobles also emphasized that the goal of the new college is to advance computation and to give all students a greater “awareness of the larger political, social context in which we’re all living.” That is the MIT vision for developing “bilinguals” — engineers, scholars, professionals, civic leaders, and policymakers who have both superb technical expertise and an understanding of complex societal issues gained from study in the humanities, arts, and social science fields.

The perils of speed and limited perspective
 
The five panelists on “Computing for the People” — representing industry, academia, government, and philanthropy — contributed particulars to the vision of a society infused with those bilinguals, and attested to the perils posed by an overly-swift integration of advanced computing into all domains of modern existence.
 
"I think of AI as jetpacks and blindfolds that will send us careening in whatever direction we're already headed," said Joi Ito, director of the MIT Media Lab. "It's going to make us more powerful but not necessarily more wise."


The key problem, according to Ito, is that machine learning and AI have to date been exclusively the province of engineers, who tend to talk only with each other. This means they can deny accountability when their work proves socially, politically, or economically destructive. "Asked to explain their code, technological people say: ‘We're just technical people, we don't deal with racial or political problems,’" Ito said." 

The ethical side of big data; Statistics Netherlands, March 4, 2019

Masja de Ree, Statistics Netherlands; The ethical side of big data

"The power of data

Why do we need to highlight the importance of ethical data use? Dechesne explains: ‘I am a mathematician. My world is a world of numbers. My education did not put much emphasis on the power of data in our society, however. Numbers frequently have a veneer of objectivity, but any conclusions drawn on the basis of data are always contingent on the definitions maintained and the decisions made when designing a research project. These choices can have a huge impact on certain groups in our society. This is something we need to be aware of. Decisions have to be made. That is fine, of course, as long as everyone is mindful and transparent when making decisions.’"

Teen who defied anti-vax mom says she got false information from one source: Facebook; The Washington Post, March 5, 2019

Michael Brice-Saddler, The Washington Post; Teen who defied anti-vax mom says she got false information from one source: Facebook

"An 18-year-old from Ohio who famously inoculated himself against his mother’s wishes in December says he attributes his mother’s anti-vaccine ideology to a single source: Facebook.

Ethan Lindenberger, a high school senior, testified Tuesday before the Senate Committee on Health, Education, Labor and Pensions, and underscored the importance of “credible” information. In contrast, he said, the false and deep-rooted beliefs his mother held — that vaccines were dangerous — were perpetuated by social media. Specifically, he said, she turned to anti-vaccine groups on social media for evidence that supported her point of view.

In an interview with The Washington Post on Tuesday, Lindenberger said Facebook, or websites that were linked on Facebook, is really the only source his mother ever relied on for her anti-vaccine information."

Olympic champion shares personal experience on the importance of ethics; NTVabc, March 5, 2019

Lauren Kummer, NTVabc; Olympic champion shares personal experience on the importance of ethics

"On Tuesday, it was Ethics Day at the University of Nebraska at Kearney.
Naber spoke to students on character and ethics in a way that's relevant to everyday life.
"I think it's important to talk about what's in the best common good. Not in what's in your best interest but what is in our best interest," said Naber.

Naber shared stories on his own, and one in particular that put him in a tough situation during the 1973 World Team Trials where ethics came into question.

"I won the race but I didn't touch the wall correctly. The official thought I should be disqualified. The meet referee wasn't sure and they let me decide. Did I intend to fight the call? I remembered I didn't touch the wall. I said "I deserve to be disqualified" and I was. For that, I lost the chance to win a gold medal at the world championships but I earned my own self-respect. Of all the decisions I made in my swimming and athletic career I think that might be the highlight," said Naber."

Monday, March 4, 2019

Should This Exist? The Ethics Of New Technology; NPR, March 3, 2019

Lulu Garcia-Navarro, NPR; Should This Exist? The Ethics Of New Technology

"In fact, the 2016 election helped raise awareness of an issue that Flickr co-founder Caterina Fake has been talking about in Silicon Valley for years — the ethics of technology.

That conversation was furthered by OpenAI's decision to publicize the nonrelease of their new technology last month, Fake told NPR's Lulu Garcia-Navarro.

"Tech companies launch products all the time, but it's rare that they announce that they're not launching a product, which is what has happened here," Fake said. "The announcement of not launching this product is basically to involve people in the conversation around what is and what is not dangerous tech."

When evaluating potential new technology, Fake asks a fundamental question: should this exist?

It's a question she explores as host of the podcast Should This Exist?"

Seeking Ground Rules for A.I.; The New York Times, March 1, 2019

Cade Metz, The New York Times; Seeking Ground Rules for A.I.

It’s not easy to encourage the ethical use of artificial intelligence. But here are 10 recommendations.

"The Recommendations 

Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
Privacy: Users should be able to easily opt out of data collection.
Diversity: A.I. technology should be developed by inherently diverse teams.
Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
Collective governance: Companies should work together to self-regulate the industry.
Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
“Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

The leaders of the groups: Frida Polli, a founder and chief executive, Pymetrics; Sara Menker, founder and chief executive, Gro Intelligence; Serkan Piantino, founder and chief executive, Spell; Paul Scharre, director, Technology and National Security Program, The Center for a New American Security; Renata Quintini, partner, Lux Capital; Ken Goldberg, William S. Floyd Jr. distinguished chair in engineering, University of California, Berkeley; Danika Laszuk, general manager, Betaworks Camp; Elizabeth Joh, Martin Luther King Jr. Professor of Law, University of California, Davis; Candice Morgan, head of inclusion and diversity, Pinterest"

Is Ethical A.I. Even Possible?; The New York Times, March 1, 2019

Cade Metz, The New York Times; Is Ethical A.I. Even Possible?

"As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation."

Saturday, March 2, 2019

Beatle late than never – stolen 'Life' magazine featuring Fab 4 returned to library after 50 years; News 5 Cleveland, February 28, 2019

Ian Cross, News 5 Cleveland; Beatle late than never – stolen 'Life' magazine featuring Fab 4 returned to library after 50 years

"Cuyahoga County Public Library representatives may be doing the twist and shout after they received the Life magazine from 1968, along with a money order for $100 and a letter that read: 

“I stole this magazine from the Parma Ridge Road Library when I was a kid. I’m sorry I took it. I’ve enclosed a check for the late fee.” 

In a Facebook post, library officials thanked the anonymous paperback burglar for returning the “borrowed” magazine that was able to get back to where it once belonged."

Medical students wanted: Only the ethical need apply; The Boston Globe, February 28, 2019

The Boston Globe; Medical students wanted: Only the ethical need apply

"Acceptance to medical school is notoriously difficult. You need to have an exceptional GPA and high Medical College Admission Test (MCAT) scores just to get an interview. Now, it’s getting even harder. Admissions officers have added a new kind of test to their screening arsenal, one that could change the face of medicine.

Since 2015, more than two dozen medical schools across the United States have embraced a test of interpersonal skills known as the CASPer (Computer Based Assessment for Sampling Personal Characteristics) test. The exam determines students’ levels of compassion and ethics — two qualities that many believe are critical to a physician’s success.

“As a society,” says Dore, “we know that strong academic skills aren’t the only trait we value in our doctors. We want them to be excellent communicators, have a strong moral sense, and be able to be empathetic across a variety of situations.”"

University of Texas can’t take away student’s PhD; C&EN, Chemical & Engineering News, February 22, 2019

Bethany Halford, C&EN, Chemical & Engineering News; University of Texas can’t take away student’s PhD
"The University of Texas at Austin does not have the authority to revoke a student’s degree, according to a Feb. 11 ruling by Judge Karin Crump in Travis County, Texas district court. The judgment is the latest turn in the university’s years-long effort to strip Suvi Orr of her doctorate in chemistry. 
Orr began her graduate studies in organic synthesis in Stephen Martin’s lab in 2003. In 2008 she successfully defended her thesis. But six years later, UT Austin sent a certified letter to Orr saying the school was invalidating her thesis based on research misconduct. The university cited a 2011 Organic Letters paper that was retracted in 2012 (DOI: 10.1021/ol302236g) because two steps in the synthesis could not be reproduced.
Orr, now a senior principal scientist at Pfizer, denies any wrongdoing."

Blockchain could ensure the integrity of scientific research trials; Digital Trends, February 23, 2019

Digital Trends; Blockchain could ensure the integrity of scientific research trials


"Researchers at the University of California, San Francisco (UCSF) created a proof of concept that shows how the integrity of clinical trial data can be protected and proven using blockchain. Blockchain allows users to track the changes made to any portion of the data entered into it, making an audit trail for regulators which can be checked for any inconsistencies. This would make it obvious if, for example, a researcher changes certain values in their data set to come to the conclusion that they wanted."
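The article doesn't detail UCSF's actual implementation, but the core idea of a tamper-evident audit trail can be sketched as a hash chain: each entry stores a hash of its own record combined with the previous entry's hash, so silently editing any earlier record invalidates every later link. This is a minimal illustration only; the function and field names below are hypothetical, not from the UCSF proof of concept.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a trial record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(chain: list, record: dict) -> None:
    """Append a record, linking it to the current tip of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev_hash)})

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if record_hash(entry["record"], prev_hash) != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"patient": "A-001", "systolic_bp": 128})
append_entry(chain, {"patient": "A-002", "systolic_bp": 135})
assert verify_chain(chain)

# Changing a value after the fact is detectable:
chain[0]["record"]["systolic_bp"] = 110
assert not verify_chain(chain)
```

In a real deployment the chain would be replicated across parties (sponsor, sites, regulator) so no single participant could rewrite history and recompute the hashes unnoticed.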
 

‘Mockingbird’ Producer Reconsiders, Letting Local Plays Go Forward; The New York Times, March 1, 2019

Michael Paulson and Alexandra Alter, The New York Times; ‘Mockingbird’ Producer Reconsiders, Letting Local Plays Go Forward


"Mr. Rudin defended his actions in a brief statement, saying, “As stewards of the performance rights of Aaron Sorkin’s play, it is our responsibility to enforce the agreement we made with the Harper Lee estate and to make sure that we protect the extraordinary collaborators who made this production.”

But he also blamed the situation on the Dramatic Publishing Company, which is run by Christopher Sergel III, Mr. Sergel’s grandson, saying it had erred in issuing licenses to present the play to theaters that should not have received them. Mr. Rudin has argued that a 1969 agreement between Ms. Lee, the author of the novel, and Dramatic Publishing bars productions by theaters within 25 miles of a city that in 1960 had a population of more than 150,000 people, as well as productions using professional actors, when a “first-class” production is running on Broadway or on tour.

“We have been hard at work creating what I hope might be a solution for those theater companies that have been affected by this unfortunate set of circumstances, in which rights that were not available to them were licensed to them by a third party who did not have the right to do so,” Mr. Rudin said. “In an effort to ameliorate the hurt caused here, we are offering each of these companies the right to perform our version of ‘To Kill a Mockingbird,’ Aaron Sorkin’s play currently running on Broadway.”...

“Unfortunately this issue has been the shot heard ’round the fine arts world over recent days,” said Davis Varner, the president of the board of the Theater of Gadsden, a community theater in Alabama that is planning to stage the Sergel version this month. The theater is not near a big city, so its rights appear to be unchallenged, but Mr. Varner issued a statement referring to Mr. Rudin as “the bully from Broadway” and said, “I am saddened and disappointed for those groups who have been forced to cancel their productions through no fault of their own.”

Others took to social media to vent their unhappiness."

Friday, March 1, 2019

Mongols Biker Club Can Keep Its Logo, Judge Rules; The New York Times, February 28, 2019

Louis Keene and Serge F. Kovaleski, The New York Times; Mongols Biker Club Can Keep Its Logo, Judge Rules

"Nearly two months after a federal jury decided that a notorious motorcycle club must forfeit the rights to its trademarked emblem, a judge on Thursday nullified the verdict, finding that seizure of the intellectual property was unconstitutional. 

In a 51-page ruling, Federal District Judge David O. Carter said the government’s strategy of trying to devastate the Mongols motorcycle club by confiscating its treasured Genghis Khan-style logo would violate the group’s First Amendment right to free speech and the excessive fines clause of the Eighth Amendment."

University of California boycotts publishing giant Elsevier over journal costs and open access; Science, February 28, 2019

Alex Fox, Jeffrey Brainard, Science; University of California boycotts publishing giant Elsevier over journal costs and open access

"The mammoth University of California (UC) system announced today it will stop paying to subscribe to journals published by Elsevier, the world’s largest scientific publisher, headquartered in Amsterdam. Talks to renew a collective contract broke down, the university said, because Elsevier refused to strike a package deal that would provide a break on subscription fees and make all articles published by UC authors immediately free for readers worldwide.

The stand by UC, which followed 8 months of negotiations, could have significant impacts on scientific communication and the direction of the so-called open-access movement, in the United States and beyond. The 10-campus system accounts for nearly 10% of all U.S. publishing output and is among the first U.S. institutions, and by far the largest, to boycott Elsevier over costs. Many administrators and librarians at U.S. universities and elsewhere have complained about what they view as excessively high journal subscription fees charged by commercial publishers."

Jill Abramson Plagiarized My Writing. So I Interviewed Her About It; Rolling Stone, February 13, 2019

Jake Malooley, Rolling Stone;

Jill Abramson Plagiarized My Writing. So I Interviewed Her About It


When journalist Jake Malooley talked to the former New York Times executive editor, she admitted only to minor mistakes — but her responses were revealing

[Kip Currier: In yesterday's Information Ethics class session, looking at Plagiarism, Attribution, and Research Integrity and Misconduct, we explored this illuminating 2/13/19 interview of Jill Abramson--veteran journalist and the first female Executive Editor of The New York Times, serving from 2011 until her firing in 2014--by Rolling Stone reporter Jake Malooley.

I also played the first ten minutes of a 2/20/19 radio interview of Abramson by WNYC's Brian Lehrer, in which Abramson fields questions from Lehrer about her ongoing plagiarism controversy and research/writing process.

The Abramson plagiarism controversy is a rich ripped-from-the-headlines case study, emphasizing the importance and implications of plagiarism and research integrity and misconduct. Imagine being in Abramson's Harvard University class this term, where the 1976 Harvard FAS alumna is teaching an Introduction to Journalism course...

Speaking of Harvard, The Harvard Crimson has an interesting 2/15/19 article on the continuing Abramson controversy, as well as prior instances of alleged plagiarism by a trio of prestigious Harvard professors in the early 2000s, who, following investigations, "faced no public disciplinary action": Current Policy, Past Investigations Offer Window Into Harvard’s Next Steps In Abramson Plagiarism Case]


"In the days that followed, Abramson gave interviews to Vox and CNN. She unconvincingly sidestepped definitions of plagiarism upheld by the Times and Harvard, contending she is guilty of little more than sloppiness. She also claimed Vice is “waging an oppo campaign” against her book. Amid all the equivocation and attempts to duck the plagiarist label, Abramson still had not sufficiently explained how my writing and that of several other journalists ended up running nearly word-for-word in her book. I didn’t feel personally aggrieved, as some colleagues believed I rightfully should. But I did think I was owed straight answers. So late last week, I requested an interview with Abramson through Simon & Schuster, the publisher of Merchants of Truth.


On Monday afternoon, Abramson phoned me from Harvard’s campus, where she would be teaching an introduction to journalism seminar. According to the syllabus for Abramson’s Spring 2019 workshop “Journalism in the Age of Trump,” a copy of which a student, Hannah Gais, tweeted, Merchants of Truth is assigned as required reading...
This interview has been condensed for length.
Correction: This article previously stated that Abramson was on her way to her Spring 2019 workshop, “Journalism in the Age of Trump.” It has been corrected to clarify that she was on her way to an introduction to journalism class."


Thursday, February 28, 2019

Michael Cohen just breached Trump’s GOP stone wall; The Washington Post, February 27, 2019

E.J. Dionne Jr., The Washington Post; Michael Cohen just breached Trump’s GOP stone wall

"Nothing Trump does should surprise us anymore, yet it was still shocking that the man who holds an office once associated with the words “leader of the free world” would refer to a murderous dictator as “my friend.” It’s clear by now that Trump feels closest to autocrats and is uneasy with truly democratic leaders, as Germany’s Chancellor Angela Merkel, among others, has learned.

The president’s apparatchiks also gave us an instructive hint as to what an unrestrained Trump might do to the free press. They excluded White House reporters Jonathan Lemire of the Associated Press and Jeff Mason of Reuters from the press pool covering the dinner between Trump and Kim for daring to ask inconvenient questions of our country’s elected leader. This wasn’t the work of Kim or Vietnam’s authoritarian government. It was the imperious action of a man who wishes he could live without the accountability that free government imposes...

Their fear that this might happen again is why House Republicans worked so hard to delegitimize Wednesday’s hearing. They and Trump would prefer Congress (and the media) to leave us in the dark. Fortunately, we do not live in North Korea."

Tuesday, February 26, 2019

New Research Study Describes DNDi As A “Commons” For Public Health; Intellectual Property Watch, February 25, 2019

David Branigan, Intellectual Property Watch; New Research Study Describes DNDi As A “Commons” For Public Health

"Since 2003, Drugs for Neglected Diseases Initiative (DNDi) has worked to meet the public health needs of neglected populations by filling gaps in drug development left by the for-profit pharmaceutical industry. A new research study by the French Development Agency analysed DNDi’s unique product development partnership (PDP) model, and found that it “illustrate[s] what can be presented as a ‘commons’ within the area of public health.”"

The research study, “DNDi, a Distinctive Illustration of Commons in the Area of Public Health,” was published earlier this month by the Agence Française de Développement (AFD), the French public development bank that “works in many sectors — energy, healthcare, biodiversity, water, digital technology, professional training, among others — to assist with transitions towards a safer, more equitable, and more sustainable world: a world in common,” according to its website."

When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019

Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?

"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...

It’s worth considering, as OpenAI seems to be encouraging us to do, how researchers and society in general should approach powerful A.I. models...

Nevertheless, OpenAI said that it would only be publishing a “much smaller version” of the model due to concerns that it could be abused. The blog post fretted that it could be used to generate false news articles, impersonate people online, and generally flood the internet with spam and vitriol... 

“There’s a general philosophy that when the time has come for some scientific progress to happen, you really can’t stop it,” says [Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with it.”"

Fixing Tech’s Ethics Problem Starts in the Classroom; The Nation, February 21, 2019

Stephanie Wykstra, The Nation; Fixing Tech’s Ethics Problem Starts in the Classroom


"Casey Fiesler, a faculty member in the Department of Information Science at the University of Colorado Boulder, said that a common model in engineering programs is a stand-alone ethics class, often taught towards the end of a program. But there’s increasingly a consensus among those teaching tech ethics that a better model is to discuss ethical issues alongside technical work. Evan Peck, a computer scientist at Bucknell University, writes that separating ethical from technical material means that students get practice “debating ethical dilemmas…but don’t get to practice formalizing those values into code.” This is particularly a problem, said Fiesler, if an ethics class is taught by someone from outside a student’s field, and the professors in their computer-science courses rarely mention ethical issues. On the other hand, classes focused squarely on the ethics of technology allow students to dig deeply into complicated questions. “I think the best solution is to do both…but if you can’t do both, incorporating [ethics material into regular coursework] is the best option,” Fiesler said."


Sunday, February 24, 2019

Pop Culture, AI And Ethics; Forbes, February 24, 2019

; Pop Culture, AI And Ethics

"In this article, I would like to take the opportunity to do a deep dive into three of the show’s episodes and offer a Design Thinking framework for how to adopt a thoughtful approach on AI implementations. Warning- there are spoilers!...

We need to continuously ask ourselves these 4 questions: How can humanity benefit from this AI/tech? What products and services can you imagine in this space? How might AI be manipulated, or unintended consequences lead to harmful outcomes? What are the suggestions for a responsible future?"

Saturday, February 23, 2019

China Uses DNA to Track Its People, With the Help of American Expertise; The New York Times, February 21, 2019

Sui-Lee Wee, The New York Times;

China Uses DNA to Track Its People, With the Help of American Expertise

The Chinese authorities turned to a Massachusetts company and a prominent Yale researcher as they built an enormous system of surveillance and control.

"Mr. Imin was one of millions of people caught up in a vast Chinese campaign of surveillance and oppression. To give it teeth, the Chinese authorities are collecting DNA — and they got unlikely corporate and academic help from the United States to do it."

Netflix Is the Most Intoxicating Portal to Planet Earth; The New York Times, February 22, 2019

Farhad Manjoo, The New York Times;

Netflix Is the Most Intoxicating Portal to Planet Earth

Instead of trying to sell American ideas to a foreign audience, it’s aiming to sell international ideas to a global audience.

"Netflix’s push abroad has not been without incident. Late last year, the company earned international condemnation for pulling an episode of “Patriot Act With Hasan Minhaj” from its service in Saudi Arabia. The comedian had criticized the Saudi crown prince, Mohammed bin Salman, after the C.I.A.’s conclusion that the prince had ordered the murder of Jamal Khashoggi, the dissident Saudi journalist.

Netflix argued that it had no choice but to obey the Saudi legal authority, which said the episode violated a statute, if it wanted to continue operating in that country. The company’s executives suggested that bringing the Saudis the rest of Netflix — every other episode of “Patriot Act” or shows that explore issues of gender and sexuality, like “Big Mouth” and “Sex Education” and “Nanette” — was better than having the entire service go dark in that country."

Thursday, February 21, 2019

How Do You Preserve History On The Moon?; NPR, February 21, 2019

Nell Greenfieldboyce, NPR; How Do You Preserve History On The Moon?

"Any nation can nominate a place within its sovereign territory to be included on the World Heritage List, she explains. The trouble with the moon is that, according to the 1967 Outer Space Treaty, no nation can claim sovereignty over anything in outer space.

This legal gray area is why Hanlon wants the U.N. space panel to issue some kind of declaration stating that the Apollo 11 landing site has unparalleled cultural importance that deserves special recognition.

The question is whether countries will be willing to agree on that kind of small step for preservation, or whether they'll balk at setting any precedent for putting part of the moon off-limits."

Wednesday, February 20, 2019

The Lab Discovering DNA in Old Books; The Atlantic, February 19, 2019

Sarah Zhang, The Atlantic;

The Lab Discovering DNA in Old Books

Artifacts have genetic material hidden inside, which can help scientists understand the past.

"But Collins isn’t just interested in human remains. He’s interested in the things these humans made; the animals they bred, slaughtered, and ate; and the economies they created.

That’s why he was studying DNA from the bones of livestock—and why his lab is now at the forefront of studying DNA from objects such as parchment, birch-bark tar, and beeswax. These objects can fill in gaps in the written record, revealing new aspects of historical production and trade. How much beeswax came from North Africa, for example? Or how did cattle plague make its way through Europe? With ample genetic data, you might reconstruct a more complete picture of life hundreds of years in the past."

How do you get anti-vaxxers to vaccinate their kids? Talk to them — for hours.; The Washington Post, February 19, 2019

Nadine Gartner, The Washington Post; How do you get anti-vaxxers to vaccinate their kids? Talk to them — for hours.

"My independent nonprofit, Boost Oregon, has found a way to reach these families by giving them an opportunity to learn about vaccines directly from medical professionals. The response has been overwhelmingly positive. In exit surveys, the vast majority of people who attend our workshops say they’ve decided to vaccinate their children as recommended by the American Academy of Pediatrics. Our approach works, but it’s time- and labor-intensive. Though we’re training medical professionals to bring these workshops across the state, it’s challenging to scale up quickly. After nearly four years of these efforts, I’ve learned that debunking misconceptions is a delicate art."

Tuesday, February 19, 2019

NATO Group Catfished Soldiers to Prove a Point About Privacy; Wired, February 18, 2019

Issie Lapowsky, Wired; NATO Group Catfished Soldiers to Prove a Point About Privacy

"For the military group that OK'd the research, the experiment effectively acted as a drill. But for the rest of us—and certainly for the social media platforms implicated in the report—the researchers hope it will serve as concrete evidence of why a fuzzy concept like privacy matters and what steps can be taken to protect it."

Some students, faculty remain uneasy about CMU's Army AI Task Force; The Pittsburgh Post-Gazette, February 18, 2019

Courtney Linder, The Pittsburgh Post-Gazette; Some students, faculty remain uneasy about CMU's Army AI Task Force

"Earlier this month, the Artificial Intelligence Task Force was introduced at the National Robotics Engineering Center. It’s meant as a hub for universities and private-industry partners to conduct research on AI in military applications.

While those on campus recognize CMU’s storied history with the U.S. Department of Defense — including contracting with the Defense Advanced Research Projects Agency (DARPA) on a regular basis and the hundreds of millions of defense dollars flowing into the university’s Software Engineering Institute — critics say they wish they had more information on this new work with the Army.

“We’re concerned that [the university] didn’t ask for any campus input or announce it,” said Wilson Ekern, a sophomore studying technical writing and German. “There’s a pretty big effort to get engineering and computer science students plugged into this military industrial complex.”

His sentiments come at a time when Silicon Valley and the tech industry, at large, are toeing a gray line between creating useful innovations for defense and civilian protection and producing autonomous weapons with the potential to kill."

The Top Three Considerations For Designing Ethical AI; Forbes, February 19, 2019

Adam Rogers, Forbes; The Top Three Considerations For Designing Ethical AI

"Great Power, Even Greater Responsibility

AI has already drastically improved the lives of millions — paving the way for more accurate and affordable health care, improving food-production capacity and building fundamentally stronger organizations. This technology could very well be the most influential innovation in human history, but with major promise comes major potential pitfalls. As a society, we must proactively address transparency, ethical considerations and policy issues to ensure we’re applying AI to put people first and fundamentally make the world a better place."

The worst possible version of the EU Copyright Directive has sparked a German uprising; BoingBoing, February 18, 2019

Cory Doctorow, BoingBoing; The worst possible version of the EU Copyright Directive has sparked a German uprising

"In the meantime, the petition to save Europe from the Directive—already the largest in EU history—keeps racking up more signatures, and is on track to be the largest petition in the history of the world."

Drones and big data: the next frontier in the fight against wildlife extinction; The Guardian, February 18, 2019

, The Guardian; Drones and big data: the next frontier in the fight against wildlife extinction

"Yet it’s not more widely used because few researchers have the skills to use this type of technology. In biology, where many people are starting to use drones, few can code an algorithm specifically for their conservation or research problem, Wich says. “There’s a lot that needs to be done to bridge those two worlds and to make the AI more user-friendly so that people who can’t code can still use the technology.”

The solutions are more support from tech companies, better teaching in universities to help students overcome their fears of coding, and finding ways to link technologies together in an internet-of-things concept where all the different sensors, including GPS, drones, cameras and sensors, work together."

Sunday, February 17, 2019

With fitness trackers in the workplace, bosses can monitor your every step — and possibly more; The Washington Post, February 16, 2019

Christopher Rowland, The Washington Post; With fitness trackers in the workplace, bosses can monitor your every step — and possibly more



[Kip Currier: This article--and case study about the upsides and downsides of employers' use of personal health data harvested from their employees' wearable devices--is a veritable "ripped from the headlines" gift from the Gods for an Information Ethics professor's discussion question for students this week!...
What are the ethics issues? 
Who are the stakeholders? 
What ethical theory/theories would you apply/not apply in your analysis and decision-making?
What are the risks and benefits presented by the issues and the technology? 
What are the potential positive and negative consequences?  
What are the relevant laws and gaps in law?
Would you decide to participate in a health data program, like the one examined in the article? Why or why not?

And for all of us...spread the word that HIPAA does NOT cover personal health information that employees VOLUNTARILY give to employers. It's ultimately your decision what to do, but we all need to be aware of the pertinent facts, so we can make the most informed decisions.
See the full article and the excerpt below...]   


"Many consumers are under the mistaken belief that all health data they share is required by law to be kept private under a federal law called HIPAA, the Health Insurance Portability and Accountability Act. The law prohibits doctors, hospitals and insurance companies from disclosing personal health information.


But if an employee voluntarily gives health data to an employer or a company such as Fitbit or Apple — entities that are not covered by HIPPA’s [sic] rules — those restrictions on disclosure don’t apply, said Joe Jerome, a policy lawyer at the Center for Democracy & Technology, a nonprofit in Washington. The center is urging federal policymakers to tighten up the rules.

“There’s gaps everywhere,’’ Jerome said.

Real-time information from wearable devices is crunched together with information about past doctors visits and hospitalizations to get a health snapshot of employees...

Some companies also add information from outside the health system — social predictors of health such as credit scores and whether someone lives alone — to come up with individual risk forecasts."