Thursday, February 28, 2019

Michael Cohen just breached Trump’s GOP stone wall; The Washington Post, February 27, 2019

E.J. Dionne Jr., The Washington Post; Michael Cohen just breached Trump’s GOP stone wall

"Nothing Trump does should surprise us anymore, yet it was still shocking that the man who holds an office once associated with the words “leader of the free world” would refer to a murderous dictator as “my friend.” It’s clear by now that Trump feels closest to autocrats and is uneasy with truly democratic leaders, as Germany’s Chancellor Angela Merkel, among others, has learned.

The president’s apparatchiks also gave us an instructive hint as to what an unrestrained Trump might do to the free press. They excluded White House reporters Jonathan Lemire of the Associated Press and Jeff Mason of Reuters from the press pool covering the dinner between Trump and Kim for daring to ask inconvenient questions of our country’s elected leader. This wasn’t the work of Kim or Vietnam’s authoritarian government. It was the imperious action of a man who wishes he could live without the accountability that free government imposes...

Their fear that this might happen again is why House Republicans worked so hard to delegitimize Wednesday’s hearing. They and Trump would prefer Congress (and the media) to leave us in the dark. Fortunately, we do not live in North Korea."

Tuesday, February 26, 2019

New Research Study Describes DNDi As A “Commons” For Public Health; Intellectual Property Watch, February 25, 2019

David Branigan, Intellectual Property Watch; New Research Study Describes DNDi As A “Commons” For Public Health

"Since 2003, Drugs for Neglected Diseases Initiative (DNDi) has worked to meet the public health needs of neglected populations by filling gaps in drug development left by the for-profit pharmaceutical industry. A new research study by the French Development Agency analysed DNDi’s unique product development partnership (PDP) model, and found that it “illustrate[s] what can be presented as a ‘commons’ within the area of public health.”"

The research study, “DNDi, a Distinctive Illustration of Commons in the Area of Public Health,” was published earlier this month by the Agence Française de Développement (AFD), the French public development bank that “works in many sectors — energy, healthcare, biodiversity, water, digital technology, professional training, among others — to assist with transitions towards a safer, more equitable, and more sustainable world: a world in common,” according to its website."

When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019

Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?

"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...

It’s worth considering, as OpenAI seems to be encouraging us to do, how researchers and society in general should approach powerful A.I. models...

Nevertheless, OpenAI said that it would only be publishing a “much smaller version” of the model due to concerns that it could be abused. The blog post fretted that it could be used to generate false news articles, impersonate people online, and generally flood the internet with spam and vitriol... 

“There’s a general philosophy that when the time has come for some scientific progress to happen, you really can’t stop it,” says [Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with it.”"

Fixing Tech’s Ethics Problem Starts in the Classroom; The Nation, February 21, 2019

Stephanie Wykstra, The Nation; Fixing Tech’s Ethics Problem Starts in the Classroom

 

"Casey Fiesler, a faculty member in the Department of Information Science at the University of Colorado Boulder, said that a common model in engineering programs is a stand-alone ethics class, often taught towards the end of a program. But there’s increasingly a consensus among those teaching tech ethics that a better model is to discuss ethical issues alongside technical work. Evan Peck, a computer scientist at Bucknell University, writes that separating ethical from technical material means that students get practice “debating ethical dilemmas…but don’t get to practice formalizing those values into code.” This is particularly a problem, said Fiesler, if an ethics class is taught by someone from outside a student’s field, and the professors in their computer-science courses rarely mention ethical issues. On the other hand, classes focused squarely on the ethics of technology allow students to dig deeply into complicated questions. “I think the best solution is to do both…but if you can’t do both, incorporating [ethics material into regular coursework] is the best option,” Fiesler said."

 

Sunday, February 24, 2019

Pop Culture, AI And Ethics; Forbes, February 24, 2019

Forbes; Pop Culture, AI And Ethics

"In this article, I would like to take the opportunity to do a deep dive into three of the show’s episodes and offer a Design Thinking framework for how to adopt a thoughtful approach on AI implementations. Warning- there are spoilers!...

We need to continuously ask ourselves these 4 questions: How can humanity benefit from this AI/tech? What products and services can you imagine in this space? How might AI be manipulated, or unintended consequences lead to harmful outcomes? What are the suggestions for a responsible future?"

Saturday, February 23, 2019

China Uses DNA to Track Its People, With the Help of American Expertise; The New York Times, February 21, 2019

Sui-Lee Wee, The New York Times;

China Uses DNA to Track Its People, With the Help of American Expertise

The Chinese authorities turned to a Massachusetts company and a prominent Yale researcher as they built an enormous system of surveillance and control.

"Mr. Imin was one of millions of people caught up in a vast Chinese campaign of surveillance and oppression. To give it teeth, the Chinese authorities are collecting DNA — and they got unlikely corporate and academic help from the United States to do it."

Netflix Is the Most Intoxicating Portal to Planet Earth; The New York Times, February 22, 2019

Farhad Manjoo, The New York Times;

Netflix Is the Most Intoxicating Portal to Planet Earth

Instead of trying to sell American ideas to a foreign audience, it’s aiming to sell international ideas to a global audience.

"Netflix’s push abroad has not been without incident. Late last year, the company earned international condemnation for pulling an episode of “Patriot Act With Hasan Minhaj” from its service in Saudi Arabia. The comedian had criticized the Saudi crown prince, Mohammed bin Salman, after the C.I.A.’s conclusion that the prince had ordered the murder of Jamal Khashoggi, the dissident Saudi journalist.

Netflix argued that it had no choice but to obey the Saudi legal authority, which said the episode violated a statute, if it wanted to continue operating in that country. The company’s executives suggested that bringing the Saudis the rest of Netflix — every other episode of “Patriot Act” or shows that explore issues of gender and sexuality, like “Big Mouth” and “Sex Education” and “Nanette” — was better than having the entire service go dark in that country."

Thursday, February 21, 2019

How Do You Preserve History On The Moon?; NPR, February 21, 2019

Nell Greenfieldboyce, NPR; How Do You Preserve History On The Moon?

"Any nation can nominate a place within its sovereign territory to be included on the World Heritage List, she explains. The trouble with the moon is that, according to the 1967 Outer Space Treaty, no nation can claim sovereignty over anything in outer space.

This legal gray area is why Hanlon wants the U.N. space panel to issue some kind of declaration stating that the Apollo 11 landing site has unparalleled cultural importance that deserves special recognition.

The question is whether countries will be willing to agree on that kind of small step for preservation, or whether they'll balk at setting any precedent for putting part of the moon off-limits."

Wednesday, February 20, 2019

The Lab Discovering DNA in Old Books; The Atlantic, February 19, 2019

Sarah Zhang, The Atlantic;

The Lab Discovering DNA in Old Books


Artifacts have genetic material hidden inside, which can help scientists understand the past.

"But Collins isn’t just interested in human remains. He’s interested in the things these humans made; the animals they bred, slaughtered, and ate; and the economies they created.

That’s why he was studying DNA from the bones of livestock—and why his lab is now at the forefront of studying DNA from objects such as parchment, birch-bark tar, and beeswax. These objects can fill in gaps in the written record, revealing new aspects of historical production and trade. How much beeswax came from North Africa, for example? Or how did cattle plague make its way through Europe? With ample genetic data, you might reconstruct a more complete picture of life hundreds of years in the past."

How do you get anti-vaxxers to vaccinate their kids? Talk to them — for hours.; The Washington Post, February 19, 2019

Nadine Gartner, The Washington Post; How do you get anti-vaxxers to vaccinate their kids? Talk to them — for hours.

"My independent nonprofit, Boost Oregon, has found a way to reach these families by giving them an opportunity to learn about vaccines directly from medical professionals. The response has been overwhelmingly positive. In exit surveys, the vast majority of people who attend our workshops say they’ve decided to vaccinate their children as recommended by the American Academy of Pediatrics. Our approach works, but it’s time- and labor-intensive. Though we’re training medical professionals to bring these workshops across the state, it’s challenging to scale up quickly. After nearly four years of these efforts, I’ve learned that debunking misconceptions is a delicate art."

Tuesday, February 19, 2019

NATO Group Catfished Soldiers to Prove a Point About Privacy; Wired, February 18, 2019

Issie Lapowsky, Wired; NATO Group Catfished Soldiers to Prove a Point About Privacy

"For the military group that OK'd the research, the experiment effectively acted as a drill. But for the rest of us—and certainly for the social media platforms implicated in the report—the researchers hope it will serve as concrete evidence of why a fuzzy concept like privacy matters and what steps can be taken to protect it."

Some students, faculty remain uneasy about CMU's Army AI Task Force; The Pittsburgh Post-Gazette, February 18, 2019

Courtney Linder, The Pittsburgh Post-Gazette; Some students, faculty remain uneasy about CMU's Army AI Task Force

"Earlier this month, the Artificial Intelligence Task Force was introduced at the National Robotics Engineering Center. It’s meant as a hub for universities and private-industry partners to conduct research on AI in military applications.

While those on campus recognize CMU’s storied history with the U.S. Department of Defense — including contracting with the Defense Advanced Research Projects Agency (DARPA) on a regular basis and the hundreds of millions of defense dollars flowing into the university’s Software Engineering Institute — critics say they wish they had more information on this new work with the Army.

“We’re concerned that [the university] didn’t ask for any campus input or announce it,” said Wilson Ekern, a sophomore studying technical writing and German. “There’s a pretty big effort to get engineering and computer science students plugged into this military industrial complex.”

 His sentiments come at a time when Silicon Valley and the tech industry, at large, are toeing a gray line between creating useful innovations for defense and civilian protection and producing autonomous weapons with the potential to kill."

The Top Three Considerations For Designing Ethical AI; Forbes, February 19, 2019

Adam Rogers, Forbes; The Top Three Considerations For Designing Ethical AI

"Great Power, Even Greater Responsibility

AI has already drastically improved the lives of millions — paving the way for more accurate and affordable health care, improving food-production capacity and building fundamentally stronger organizations. This technology could very well be the most influential innovation in human history, but with major promise comes major potential pitfalls. As a society, we must proactively address transparency, ethical considerations and policy issues to ensure we’re applying AI to put people first and fundamentally make the world a better place."

The worst possible version of the EU Copyright Directive has sparked a German uprising; BoingBoing, February 18, 2019

Cory Doctorow, BoingBoing; The worst possible version of the EU Copyright Directive has sparked a German uprising

"In the meantime, the petition to save Europe from the Directive—already the largest in EU history—keeps racking up more signatures, and is on track to be the largest petition in the history of the world."

Drones and big data: the next frontier in the fight against wildlife extinction; The Guardian, February 18, 2019

The Guardian; Drones and big data: the next frontier in the fight against wildlife extinction

"Yet it’s not more widely used because few researchers have the skills to use this type of technology. In biology, where many people are starting to use drones, few can code an algorithm specifically for their conservation or research problem, Wich says. “There’s a lot that needs to be done to bridge those two worlds and to make the AI more user-friendly so that people who can’t code can still use the technology.”

The solutions are more support from tech companies, better teaching in universities to help students overcome their fears of coding, and finding ways to link technologies together in an internet-of-things concept where all the different sensors, including GPS, drones, cameras and sensors, work together."

Sunday, February 17, 2019

With fitness trackers in the workplace, bosses can monitor your every step — and possibly more; The Washington Post, February 16, 2019

Christopher Rowland, The Washington Post; With fitness trackers in the workplace, bosses can monitor your every step — and possibly more



[Kip Currier: This article--and case study about the upshots and downsides of employers' use of personal health data harvested from their employees' wearable devices--is a veritable "ripped from the headlines" gift from the Gods for an Information Ethics professor's discussion question for students this week!... 
What are the ethics issues? 
Who are the stakeholders? 
What ethical theory/theories would you apply/not apply in your analysis and decision-making?
What are the risks and benefits presented by the issues and the technology? 
What are the potential positive and negative consequences?  
What are the relevant laws and gaps in law?
Would you decide to participate in a health data program, like the one examined in the article? Why or why not?

And for all of us...spread the word that HIPAA does NOT cover personal health information that employees VOLUNTARILY give to employers. It's ultimately up to each of us to decide what to do, but we all need to be aware of the pertinent facts, so we can make the most informed decisions.
See the full article and the excerpt below...]   


"Many consumers are under the mistaken belief that all health data they share is required by law to be kept private under a federal law called HIPAA, the Health Insurance Portability and Accountability Act. The law prohibits doctors, hospitals and insurance companies from disclosing personal health information.


But if an employee voluntarily gives health data to an employer or a company such as Fitbit or Apple — entities that are not covered by HIPPA’s [sic] rules — those restrictions on disclosure don’t apply, said Joe Jerome, a policy lawyer at the Center for Democracy & Technology, a nonprofit in Washington. The center is urging federal policymakers to tighten up the rules.

“There’s gaps everywhere,’’ Jerome said.

Real-time information from wearable devices is crunched together with information about past doctors’ visits and hospitalizations to get a health snapshot of employees...

Some companies also add information from outside the health system — social predictors of health such as credit scores and whether someone lives alone — to come up with individual risk forecasts."

Roger McNamee: ‘It’s bigger than Facebook. This is a problem with the entire industry'; The Observer via The Guardian, February 16, 2019

Alex Hern, The Observer via The Guardian; Roger McNamee: ‘It’s bigger than Facebook. This is a problem with the entire industry'

"Mark Zuckerberg’s mentor and an early investor in Facebook on why his book Zucked urges people to turn away from big tech’s toxic business model

Roger McNamee is an American fund manager and venture capitalist who has made investments in, among others, Electronic Arts, Sybase, Palm Inc and Facebook. In 2004, along with Bono and others, he co-founded Elevation Partners, a private equity firm. He has recently published Zucked: Waking Up to the Facebook Catastrophe...

Is this a Facebook problem or a Mark Zuckerberg problem?
 

It’s bigger than Facebook. This is a problem with the entire internet platform industry, and Mark is just one of the two most successful practitioners of it.

This is a cultural model that infected Silicon Valley around 2003 – so, exactly at the time that Facebook and LinkedIn were being started – and it comes from a specific route.

Silicon Valley spent the period from 1950 to 2003 first with the space programme, and then with personal computers and the internet. The cultures of those things were very idealistic: make the world a better place through technology. Empower the people who use technology to be their best selves. Steve Jobs famously characterised his computers as bicycles for the mind.

The problem with Google and Facebook is that their goal is to replace humans in many of the core activities of life...

Do you think there’s a version of history in which we don’t end up in this situation? 

The culture into which Facebook was born was this deeply libertarian philosophy that was espoused by their first investor, Peter Thiel, and the other members of the so-called “PayPal mafia”.

They were almost single-handedly responsible for creating the social generation of companies. And their insights were brilliant. Their ideas about how to grow companies were revolutionary and extraordinarily successful. The challenge was that they also had a very different philosophy from the prior generations of Silicon Valley. Their notion was that disruption was perfectly reasonable because you weren’t actually responsible for anybody but yourself, so you weren’t responsible for the consequences of your actions.

That philosophy got baked into their companies in this idea that you could have a goal – in Facebook’s case, connecting the whole world on one network – and that goal would be so important that it justified whatever means were necessary to get there."

Saturday, February 16, 2019

Vatican, Microsoft team up on artificial intelligence ethics; The Washington Post, February 13, 2019

Associated Press via The Washington Post; Vatican, Microsoft team up on artificial intelligence ethics

"The Vatican says it is teaming up with Microsoft on an academic prize to promote ethics in artificial intelligence.

Pope Francis met privately on Wednesday with Microsoft President Brad Smith and the head of a Vatican scientific office that promotes Catholic Church positions on human life.

The Vatican said Smith and Archbishop Vincenzo Paglia of the Pontifical Academy for Life told Francis about the international prize for an individual who has successfully defended a dissertation on ethical issues involving artificial intelligence."

Thursday, February 14, 2019

Parkland school turns to experimental surveillance software that can flag students as threats; The Washington Post, February 13, 2019

Drew Harwell, The Washington Post; Parkland school turns to experimental surveillance software that can flag students as threats

"The specter of student violence is pushing school leaders across the country to turn their campuses into surveillance testing grounds on the hope it’ll help them detect dangerous people they’d otherwise miss. The supporters and designers of Avigilon, the AI service bought for $1 billion last year by tech giant Motorola Solutions, say its security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.

But the advanced monitoring technologies ensure that the daily lives of American schoolchildren are subjected to close scrutiny from systems that will automatically flag certain students as suspicious, potentially spurring a response from security or police forces, based on the work of algorithms that are hidden from public view.

The camera software has no proven track record for preventing school violence, some technology and civil liberties experts argue. And the testing of their algorithms for bias and accuracy — how confident the systems are in identifying possible threats — has largely been conducted by the companies themselves."

What to tell patients when artificial intelligence is part of the care team; American Medical Association (AMA), February 13, 2019

Staff News Writer, American Medical Association (AMA); What to tell patients when artificial intelligence is part of the care team


"Artificial intelligence (AI) in health care can help manage and analyze data, make decisions and conduct conversations. The availability of AI is destined to drastically change physicians’ roles and everyday practices. It is key that physicians be able to adapt to changes in diagnostics, therapeutics and practices of maintaining patient safety and privacy. However, physicians need to be aware of ethically complex questions about implementation, uses and limitations of AI in health care.   

The February issue of the AMA Journal of Ethics® (@JournalofEthics) features numerous perspectives on AI in health care and gives you an opportunity to earn CME credit."

Wednesday, February 13, 2019

AI ethics: Time to move beyond a list of principles; Information Age, February 13, 2019

Nick Ismail, Information Age; AI ethics: Time to move beyond a list of principles

"AI ethics should be a universally accepted practice.

AI is only as good as the data behind it, and as such, this data must be fair and representative of all people and cultures in the world. The technology must also be developed in accordance with international laws, and we must tread carefully with the integration of AI into weaponry — all this fits into the idea of AI ethics. Is it moral, is it safe…is it right?...

Indeed, ‘an ethical approach to the development and deployment of algorithms, data and AI (ADA) requires clarity and consensus on ethical concepts and resolution of tensions between values,’ according to a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

Organisations and governments need help, and this report provides a broad roadmap for work on the ethical and societal implications of ADA-based technologies."

Defying Parents, A Teen Decides To Get Vaccinated; NPR, February 9, 2019

Amanda Morris and Scott Simon, NPR; Defying Parents, A Teen Decides To Get Vaccinated

"Ethan Lindenberger is getting vaccinated for, well, just about everything.

He's 18 years old, but had never received vaccines for diseases like hepatitis, polio, measles, mumps, rubella, or the chickenpox.

Lindenberger's mother, Jill Wheeler, is anti-vaccine. He said she has been influenced by online misinformation, such as a debunked study that claimed certain vaccines were linked with autism, or a theory that vaccines cause brain damage. Incorrect ideas like these have spread like wildfire, so much so that the CDC has explicitly tried to combat them, posting pages like "Vaccines Do Not Cause Autism.""

Facebook under pressure to halt rise of anti-vaccination groups; The Guardian, February 12, 2019

Ed Pilkington and Jessica Glenza, The Guardian; Facebook under pressure to halt rise of anti-vaccination groups

"Dr Noni MacDonald, a professor of pediatrics at Dalhousie University in Halifax, Nova Scotia, Canada, who has worked as an expert adviser to the WHO on immunization, questioned why Facebook was unrestrained by the stringent controls against misinformation put on drug companies. “We don’t let big pharma or big food or big radio companies do this, so why should we let this happen in this venue?”

She added: “When a drug company puts a drug up in the formal media, they can’t tell you something false or they will be sued. So why is this different? Why is this allowed?”"

Tuesday, February 12, 2019

A.I. Shows Promise Assisting Physicians; The New York Times, February 11, 2019

Cade Metz, The New York Times; A.I. Shows Promise Assisting Physicians

"Each year, millions of Americans walk out of a doctor’s office with a misdiagnosis. Physicians try to be systematic when identifying illness and disease, but bias creeps in. Alternatives are overlooked.

Now a group of researchers in the United States and China has tested a potential remedy for all-too-human frailties: artificial intelligence.

In a paper published on Monday in Nature Medicine, the scientists reported that they had built a system that automatically diagnoses common childhood conditions — from influenza to meningitis — after processing the patient’s symptoms, history, lab results and other clinical data."

Rethinking Medical Ethics; Forbes, February 11, 2019

Forbes; Rethinking Medical Ethics

"Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision—and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI “black box”? How do they make sure AI systems avoid bias and protect patient privacy?

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges."

‘Sorrow Is the Price You Pay for Love’; The Atlantic, February 5, 2019

Video by Erlend Eirik Mo, The Atlantic;

‘Sorrow Is the Price You Pay for Love’


[Kip Currier: A remarkable short video. Poignant, uplifting, inspiring. A reminder of what matters most, and what's worth striving for and toward.

Watch and share with others.]

"“So much in her story was compelling for me,” Mo told The Atlantic. “It is unique, about a girl doing a male macho dance, and universal, about love and sorrow.”"

EU Recalls Children’s Smartwatch Over Security Concerns; Lexology, February 8, 2019

Hunton Andrews Kurth LLP, Lexology; EU Recalls Children’s Smartwatch Over Security Concerns

"The European Commission has issued an EU-wide recall of the Safe-KID-One children’s smartwatch marketed by ENOX Group over concerns that the device leaves data such as location history, phone and serial numbers vulnerable to hacking and alteration."

Monday, February 11, 2019

A Confederacy of Grift; The Atlantic, February 10, 2019

Quinta Jurecic, The Atlantic; A Confederacy of Grift

The subjects of Robert Mueller’s investigation are cashing in.

"For people in the greater Trump orbit, the publicity of a legal clash with Robert Mueller provides a chance to tap into the thriving marketplace of fringe pro-Trump media. Disinformation in America is a business. And the profit to be turned from that business is a warning sign that the alternative stories of the Mueller investigation spun by the president’s supporters will have a long shelf life."

Monday, February 4, 2019

How the Nazis Used the Rule of Law Against Jewish Lawyers; The Daily Beast, February 1, 2019


How the Nazis Used the Rule of Law Against Jewish Lawyers

A new book on the persecution of Jewish lawyers under the Third Reich ably documents a dark history—but fails to acknowledge the complicity of the law.

"Released in English for the first time by the American Bar Association (ABA), Lawyers Without Rights is a powerful work of history, commemorating Berlin’s Jewish attorneys while also describing how they were barred from their profession and, in most cases, driven from their city. Unfortunately, the tragedy of Lawyers Without Rights is not confined to history but permeates the ongoing idealization of “the rule of law.”"

Let Children Get Bored Again; The New York Times, February 2, 2019

Pamela Paul, The New York Times;

Let Children Get Bored Again

Boredom teaches us that life isn’t a parade of amusements. More important, it spawns creativity and self-sufficiency.

"Kids won’t listen to long lectures, goes the argument, so it’s on us to serve up learning in easier-to-swallow portions.

But surely teaching children to endure boredom rather than ratcheting up the entertainment will prepare them for a more realistic future, one that doesn’t raise false expectations of what work or life itself actually entails. One day, even in a job they otherwise love, our kids may have to spend an entire day answering Friday’s leftover email. They may have to check spreadsheets. Or assist robots at a vast internet-ready warehouse.

This sounds boring, you might conclude. It sounds like work, and it sounds like life. Perhaps we should get used to it again, and use it to our benefit. Perhaps in an incessant, up-the-ante world, we could do with a little less excitement."