Tuesday, April 9, 2019

Real or artificial? Tech titans declare AI ethics concerns; AP, April 7, 2019

Matt O'Brien and Rachel Lerman, AP; Real or artificial? Tech titans declare AI ethics concerns

"The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side.

But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than what’s in society’s best interests.

“Ethical AI” has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?"

Monday, April 8, 2019

AI systems should be accountable, explainable, and unbiased, says EU; The Verge, April 8, 2019

James Vincent, The Verge; AI systems should be accountable, explainable, and unbiased, says EU

"The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.

These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy, moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology."

Expert Panel: What even IS 'tech ethics'?; TechCrunch, April 2, 2019

Greg Epstein, TechCrunch; Expert Panel: What even IS 'tech ethics'?

"It’s been a pleasure, this past month, to launch a weekly series investigating issues in tech ethics, here at TechCrunch. As discussions around my first few pieces have taken off, I’ve noticed one question recurring in a number of different ways: what even IS “tech ethics”? I believe there’s lots of room for debate about what this growing field entails, and I hope that remains the case because we’re going to need multiple ethical perspectives on technologies that are changing billions of lives. That said, we need to at least attempt to define what we’re talking about, in order to have clearer public conversations about the ethics of technology."

Are big tech’s efforts to show it cares about data ethics another diversion?; The Guardian, April 7, 2019

John Naughton, The Guardian; Are big tech’s efforts to show it cares about data ethics another diversion?

"No less a source than Gartner, the technology analysis company, for example, has also sussed it and indeed has logged “data ethics” as one of its top 10 strategic trends for 2019...

Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash. 

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity."

Circumcision, patient trackers and torture: my job in medical ethics; The Guardian, April 8, 2019

Julian Sheather, The Guardian; Circumcision, patient trackers and torture: my job in medical ethics

"Monday

Modern healthcare is full of ethical problems. Some are intensely practical, such as whether we can withdraw a feeding tube from a patient in a vegetative state who could go on living for many years, or whether a GP should give a police officer access to patient records following a local rape. 

Others are more speculative and future-oriented: will robots become carers, and would that be a bad thing? And then there are the political questions, like whether the Home Office should have access to patient records. My job is to advise the British Medical Association on how we navigate these issues and make sure the theory works in practice for patients and healthcare professionals."

Sunday, April 7, 2019

Hey Google, sorry you lost your ethics council, so we made one for you; MIT Technology Review, April 6, 2019

Bobbie Johnson and Gideon Lichfield, MIT Technology Review; Hey Google, sorry you lost your ethics council, so we made one for you

"Well, that didn’t take long. After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC—a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing.

How did things go so wrong? And can Google put them right? We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next. If these people had been on ATEAC, the story might have had a different outcome."

Thursday, April 4, 2019

Highly Profitable Medical Journal Says Open Access Publishing Has Failed. Right.; Forbes, April 1, 2019

Steven Salzberg, Forbes; Highly Profitable Medical Journal Says Open Access Publishing Has Failed. Right.

"What Haug doesn't mention here is that there is one reason (and only one, I would argue) that NEJM makes all of its articles freely available after some time has passed: the NIH requires it. This dates back to 2009, when Congress passed a law, after intense pressure from citizens who were demanding access to the research results that they'd paid for, requiring all NIH-funded results to be deposited in a free, public repository (now called PubMed Central) within 12 months of publication.

Scientific publishers fought furiously against this policy. I know, because I was there, and I talked to many people involved in the fight at the time. The open-access advocates (mostly patient groups) wanted articles to be made freely available immediately, and they worked out a compromise where the journals could have 6 months of exclusivity. At the last minute, the NIH Director at the time, Elias Zerhouni, extended this to 12 months, for reasons that remain shrouded in secrecy, but thankfully, the public (and science) won the main battle. For NEJM to turn around now and boast that they are releasing articles after an embargo period, without mentioning this requirement, is hypocritical, to say the least. Believe me, if the NIH requirement disappeared (and publishers are still lobbying to get rid of it!), NEJM would happily go back to keeping all access restricted to subscribers.

The battle is far from over. Open access advocates still want to see research released immediately, not after a 6-month or 12-month embargo, and that's precisely what the European Plan S will do."

The problem with AI ethics; The Verge, April 3, 2019

James Vincent, The Verge; The problem with AI ethics

Is Big Tech’s embrace of AI ethics boards actually helping anyone?

"Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

“It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right."

Google’s brand-new AI ethics board is already falling apart; Vox, April 3, 2019

Kelsey Piper, Vox; Google’s brand-new AI ethics board is already falling apart

"Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.

James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”

Altogether, it’s not the most promising start for the board.

The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one."

Monday, April 1, 2019

Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret; Forbes, March 27, 2019

Sam Shead, Forbes; Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret

"Google announced a new external advisory council to keep its artificial intelligence developments in check on Wednesday, but the mysterious AI ethics board that was set up when the company bought the DeepMind AI lab in 2014 remains shrouded in mystery.

The new advisory council consists of eight members that span academia and public policy.

"We've established an Advanced Technology External Advisory Council (ATEAC)," wrote Kent Walker, SVP of global affairs at Google, in a blog post on Tuesday. "This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work."

Here is the full list of AI advisory council members:"

Friday, March 29, 2019

'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley; The Guardian, March 29, 2019

Sam Levin, The Guardian; 'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley

"“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”

Universities and ethics boards could play a vital role in counteracting these trends. But they rarely work with people who are affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.”

It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.

The Partnership on AI, an ethics-focused industry group launched by Google, Facebook, Amazon, IBM and Microsoft, does not appear to have black board members or staff listed on its site, and has a board dominated by men. A separate Microsoft research group dedicated to “fairness, accountability, transparency, and ethics in AI” also excludes black voices."
 

Apple Martin tells off mother Gwyneth Paltrow for sharing photo without consent; The Guardian, March 28, 2019

Kate Lyons, The Guardian; Apple Martin tells off mother Gwyneth Paltrow for sharing photo without consent

"Paltrow posted a photo to Instagram earlier in the week of herself with Apple Martin, her 14-year-old daughter with Coldplay singer Chris Martin, at a ski field. Apple’s face is largely covered by ski goggles.

Apple commented on the post: “Mom we have discussed this. You may not post anything without my consent.”

Paltrow replied: “You can’t even see your face!”

Apple’s comment, which was later deleted, sparked debate about how much parents should share about their children’s lives on social media."

A study of ethicists finds they’re no more ethical than the rest of us (and no better at calling their mothers); Quartz, March 27, 2019

Olivia Goldhill, Quartz; A study of ethicists finds they’re no more ethical than the rest of us (and no better at calling their mothers)

"For all their pontificating and complex moral theories, ethicists are just as disappointingly flawed as the rest of humanity. A study of 417 professors published last week in Philosophical Psychology found that, though the 151 ethics professors expressed stricter moral views, they were no better at behaving ethically."

With Vaccine Misinformation, Libraries Walk a Fine Line; Undark, March 22, 2019

Jane Roberts, Undark; With Vaccine Misinformation, Libraries Walk a Fine Line

As vanguards of intellectual freedom, public libraries face difficult questions regarding what vaccine materials to make available. How to decide?

"The decision on what to make available to library patrons — and what not to — would seem perilous territory for America’s foundational repositories of ideas, though debates over library collections are not new. Still, in an era beset by “fake news” and other artifacts of the disinformation age, libraries (and librarians) may once again find themselves facing difficult choices. One of the core values of librarianship, said Andrea Jamison, a lecturer in library science at Valparaiso University in Indiana, is upholding the principles of intellectual freedom — which include challenging censorship. “We do want to make sure we are presenting information that is accurate,” Jamison said. “But then the question becomes, who becomes the determining factor?”"

Wednesday, March 20, 2019

The New Zealand Terror Attack Shows Our Ethics Lagging Way Behind Our Technology; Forbes, March 19, 2019

Todd Essig, Forbes; The New Zealand Terror Attack Shows Our Ethics Lagging Way Behind Our Technology

"We are failing. Collectively. Some more than others. When white nationalist terrorism struck New Zealand, after similar strikes in Norway, Pittsburgh and Charleston, it showed how we are failing to meet a central challenge posed by our technologically hyper-connected world. Namely, the cultural consequences of rapidly advancing technology require an equally accelerated and psychologically-informed life-long ethical education. The more things change, well, the more things have to change. We all have to do better.

Hate speech takes root and sprouts violence in the fertile ground of, as Christian Picciolini describes in White American Youth: My Descent into America's Most Violent Hate Movement--and How I Got Out, someone searching for identity, community, and purpose. Simply put, the developed world is failing to provide good-enough experiences of “identity, community, and purpose” suitable for 21st-century techno-culture.

The old ways for learning how to be a good, decent person no longer work, or don’t work well enough for enough people. Of course it's an incredibly complex issue. But one piece is that people are now paradoxically isolated at their screens at the same time they are globally connected everywhere with anyone they choose. This paradox creates a feeling of community but without the responsibilities of community. The complexity and consequence of being fully with another person is diminished. Opportunities for empathy shrink to a vanishing point. But empathy creates the friction we need to slow and maybe even stop hate. So hate grows."

Tuesday, March 19, 2019

Ethics, Computing, and AI: Perspectives from MIT; MIT News, March 18, 2019

MIT News; Ethics, Computing, and AI: Perspectives from MIT

Faculty representing all five MIT schools offer views on the ethical and societal implications of new technologies.

"The MIT Stephen A. Schwarzman College of Computing will reorient the Institute to bring the power of computing and AI to all fields at MIT; allow the future of computing and AI to be shaped by all MIT disciplines; and advance research and education in ethics and public policy to help ensure that new technologies benefit the greater good.

To support ongoing planning for the new college, Dean Melissa Nobles invited faculty from all five MIT schools to offer perspectives on the societal and ethical dimensions of emerging technologies. This series presents the resulting commentaries — practical, inspiring, concerned, and clear-eyed views from an optimistic community deeply engaged with issues that are among the most consequential of our time. 

The commentaries represent diverse branches of knowledge, but they sound some common themes, including: the vision of an MIT culture in which all of us are equipped and encouraged to discern the impact and ethical implications of our endeavors."

Educators Urge Parents And High Schools To Make Ethics The Heart Of College Applications; WBUR, On Point, March 18, 2019

WBUR, On Point; Educators Urge Parents And High Schools To Make Ethics The Heart Of College Applications

"A new report is calling on parents and high schools to put ethical character at the center of college admissions.

The report, though long planned, comes out as the country is still reeling from revelations that wealthy parents bribed standardized test administrators, college coaches and at least one former college trustee to admit students who might not otherwise have been qualified...

The authors make several recommendations to parents:
  1. Keep the focus on your teen. "It's critical for parents to disentangle their own wishes from their teen's wishes," the authors write.
  2. Follow your ethical GPS. The authors advise parents not to let their own voice intrude in college essays, and to not look the other way when hired tutors are over-involved in applications.
  3. Use the admissions process as an opportunity for ethical education.
  4. Be authentic. The authors recommend not sending conflicting messages to their children about what kind of college they should try to get into.
  5. Help your teen contribute to others in meaningful ways. "Service trips to distant countries or launching a new service project are ... not what matters to admissions deans," the authors say. They recommend parents focus on their children's authentic interests instead.
  6. Advocate for elevating ethical character and reducing achievement-related distress.
  7. Model and encourage gratitude."

Facebook's privacy meltdown after Cambridge Analytica is far from over; The Guardian, March 18, 2019

Siva Vaidhyanathan, The Guardian; Facebook's privacy meltdown after Cambridge Analytica is far from over

"Facebook might not be run by Bond villains. But it’s run by people who have little knowledge of or concern for democracy or the dignity of the company’s 2.3 billion users.

The privacy meltdown story should be about how one wealthy and powerful company gave our data without our permission to hundreds of companies with no transparency, oversight, or even concern about abuse. Fortunately, the story does not end with Cambridge Analytica. The United States government revealed on Wednesday that it had opened a criminal investigation into Facebook over just these practices."

Myspace loses all content uploaded before 2016; The Guardian, March 18, 2019

Alex Hern, The Guardian; Myspace loses all content uploaded before 2016 

Faulty server migration blamed for mass deletion of songs, photos and video

"Myspace, the once mighty social network, has lost every single piece of content uploaded to its site before 2016, including millions of songs, photos and videos with no other home on the internet.
 
The company is blaming a faulty server migration for the mass deletion, which appears to have happened more than a year ago, when the first reports appeared of users unable to access older content. The company has confirmed to online archivists that music has been lost permanently, dashing hopes that a backup could be used to permanently protect the collection for future generations...

Some have questioned how the embattled company, which was purchased by Time Inc in 2016, could make such a blunder."

Saturday, March 16, 2019

'I can get any novel I want in 30 seconds': can book piracy be stopped?; The Guardian, March 6, 2019

Katy Guest, The Guardian; 'I can get any novel I want in 30 seconds': can book piracy be stopped?

"The UK government’s Intellectual Property Office estimates that 17% of ebooks are consumed illegally. Generally, pirates tend to be from better-off socioeconomic groups, and aged between 30 and 60. Many use social media to ask for tips when their regular piracy website is shut down; when I contacted some, those who responded always justified it by claiming they were too poor to buy books – then told me they read them on their e-readers, smartphones or computer screens – or that their areas lacked libraries, or they found it hard to locate books in the countries where they lived. Some felt embarrassed. Others blamed greedy authors for trying to stop them.

When we asked Guardian readers to tell us about their experiences with piracy, we had more than 130 responses from readers aged between 20 and 70. Most regularly downloaded books illegally and while some felt guilty – more than one said they only pirated “big names” and when “the author isn’t on the breadline, think Lee Child” – the majority saw nothing wrong in the practice. “Reading an author’s work is a greater compliment than ignoring it,” said one, while others claimed it was part of a greater ethos of equality, that “culture should be free to all”."

The Marines don’t want you to see what happens when propaganda stops and combat begins; The Washington Post, March 15, 2019

Alex Horton, The Washington Post; The Marines don’t want you to see what happens when propaganda stops and combat begins

"Lagoze found himself in a murky gray area of free speech and fair-use government products. U.S. citizens can already go on Pentagon-operated sites and download free military photos and video. Their tax dollars fund it, and federal government creations are not protected by copyright.

So could Lagoze take the moments he filmed with government resources and make something new?

He worked with the Knight First Amendment Institute at Columbia University to push back against the military’s claims of impropriety. The Marine Corps relented this month."

The costs of failing to immunize children are staggering. Just ask one young boy in Oregon.; The Washington Post, March 15, 2019

Editorial Board, The Washington Post; The costs of failing to immunize children are staggering. Just ask one young boy in Oregon.

"The argument that states should permit only narrowly defined religious objections to vaccination hinges on the idea of herd immunity, which prevents contagious diseases from spreading if a high enough proportion of a community is vaccinated. This is why vaccination requirements are linked to a child’s ability to attend school. Parents’ right to choose what happens to their own children is outweighed by the state’s interest in protecting all children.

Tetanus, while not itself infectious, is included in vaccination requirements because the shot immunizing against it also protects recipients from whooping cough and diphtheria. But the Oregon case is a reminder that the vaccination controversy is not only about whether parents have a right to endanger other people’s children. It is also about whether they have a right to endanger their own. The six-figure cost in Oregon is startling enough. The cost to the 6-year-old boy, who could barely walk when he was transferred out of the ICU, is tremendous...

Parents have substantial leeway to weigh risk as they see fit — but when does a parent’s right to be irresponsible run up against a child’s right not to contract a life-threatening illness?"

Department of Defense discusses the ethics of AI use at Carnegie Mellon; Pittsburgh Business Times, March 15, 2019

Pittsburgh Business Times; Department of Defense discusses the ethics of AI use at Carnegie Mellon

"As artificial intelligence looms closer and closer to inevitable integration into nearly every aspect of national security, the U.S. Department of Defense tasked the Defense Innovation Board with drafting a set of guiding principles for the ethical use of AI in such cases. 

The DIB wants to know what the public thinks.

The DIB’s subcommittee on science and technology hosted a public listening session Thursday at Carnegie Mellon University focused on “The Ethical and Responsible Use of Artificial Intelligence for the Department of Defense.” 

It’s one of three DIB listening sessions scheduled for across the U.S. to collect public thoughts and concerns. Using the ideas collected, the DIB will put together its guidelines in the coming months and announce a full recommendation for the DoD later this year."

Friday, March 15, 2019

Review: 'The Inventor' is a coolly appalling portrait of Elizabeth Holmes and the Theranos scandal; The Los Angeles Times, March 14, 2019

Justin Chang, The Los Angeles Times; Review: 'The Inventor' is a coolly appalling portrait of Elizabeth Holmes and the Theranos scandal

"As a quick glance at this week’s headlines will remind you — a staggering college admissions scandal, a wave of indictments in the cases of Paul Manafort and Jussie Smollett — we are living in deeply fraudulent times. But if there are few people or institutions worthy of our trust anymore, perhaps we can still trust that, eventually, Alex Gibney will get around to making sense of it all. Over the course of his unflagging, indispensable career he has churned out documentaries on Scientology and Enron, Lance Armstrong and Casino Jack — individual case studies in a rich and fascinating investigation of the American hustler at work.
 
Gibney approaches his subjects with the air of an appalled moralist and, increasingly, a grudging connoisseur. His clean, straightforward style, which usually combines smart talking heads, slick graphics and reams of meticulous data, is clearly galvanized by these charismatic individuals, who are pathological in their dishonesty and riveting in their chutzpah. And he is equally fascinated by the reactions, ranging from unquestioning belief to conflicted loyalty, that they foster among their followers and associates, who in many cases shielded them, at least for a while, from public discovery and censure.
 
“The Inventor: Out for Blood in Silicon Valley,” Gibney’s latest exercise in coolly measured outrage, is an engrossing companion piece to his other works in this vein. The subject of this HBO documentary is Elizabeth Holmes, the self-styled biotech visionary who dropped out of Stanford at age 19 and founded a company called Theranos, which promised to bring about a revolution in preventive medicine and personal healthcare. Its top-secret weapon was a compact machine called the Edison, which could purportedly run more than 200 individual tests from just a few drops of blood, obtained with just a prick of the finger.
 
Holmes’ vision of a brave new world — one in which anyone could stop by Walgreens and obtain a comprehensive, potentially life-saving snapshot of their health — proved tantalizing enough to raise more than $400 million and earned her a reputation as possibly the greatest inventor since, well, Thomas Edison. Her investors included Betsy DeVos, Rupert Murdoch and the Waltons; Henry Kissinger, George Shultz and James Mattis sat on her board of directors. But that was all before the Wall Street Journal’s John Carreyrou and other investigative journalists exposed glaring faults in the Edison’s design and sent the company’s $10-billion valuation spiraling down to nothing. Theranos dissolved in 2018, and Holmes and former company president Sunny Balwani were charged with conspiracy and fraud.
 
Full disclosure: As the son of a retired medical technologist who spent more than 30 years testing blood the traditional way, I approached “The Inventor” with great fascination and more than a little schadenfreude. The movie, for its part, seems both magnetized and repelled by its subject, a reaction that it will likely share with its audience. Gibney is perhaps overly fond of deploying intense, lingering close-ups of Holmes’ face and peering deep into her unnerving blue eyes (“She didn’t blink,” a former employee recalls). If the eyes are the windows to the soul, “The Inventor” just keeps looking and looking, as though uncertain whether or not its subject has one."