Friday, April 12, 2019

The open access research model is hurting academics in poorer countries; Quartz, April 12, 2019

Brenda Wingfield, University of Pretoria & Bob Millar, University of Pretoria, Quartz; The open access research model is hurting academics in poorer countries

"There is, however, little focus on the costs of open access to researchers in the developing world. Most people we have spoken to inside academia are under the impression that these costs are waived. But that’s only the case for some journals in 47 of the world’s “least developed” nations; researchers in the 58 other countries in the developing world must pay the full price...

The cost of a PLOS ONE article is 20% of the cost of a Masters student’s scholarship. So the choice is “do I give a Masters student a scholarship, or publish more in open access journals?” We are trying to do both and we are sure that’s the approach many research programs are trying to take. But as more journals take the open access route this is going to be more difficult. In future, if we want to publish more articles in open access journals, we will have to reduce the number of Masters, Doctoral and postdoctoral students in our programs."

Thursday, April 11, 2019

How The Times Thinks About Privacy; The New York Times, April 10, 2019

A.G. Sulzberger, The New York Times; How The Times Thinks About Privacy

We’re examining our policies and practices around data, too. 

"The Times is committed to continue taking steps to increase transparency and protections. And our journalists will do their part to ensure that the public and policymakers are fully informed by covering these issues aggressively, fairly and accurately. Over the coming months, The Privacy Project will feature reporters investigating how digital privacy is being compromised, Op-Ed editors bringing in outside voices to help foster debate and contextualize trade-offs, and opinion writers calling for solutions. All of us at The Times will be reading closely as well, using their findings to help inform the continuing evolution of our own policies and practices."

Do You Know What You’ve Given Up?; The New York Times, April 10, 2019

James Bennet, The New York Times; Do You Know What You’ve Given Up?

"It seems like a good moment to pause and consider the choices we’ve already made, and the ones that lie ahead. That’s why Times Opinion is launching The Privacy Project, a monthslong initiative to explore the technology, to envision where it’s taking us, and to convene debate about how we should control it to best realize, rather than stunt or distort, human potential."

It's Time to Panic About Privacy; The New York Times, April 10, 2019

Farhad Manjoo, The New York Times; It's Time to Panic About Privacy

"Here is the stark truth: We in the West are building a surveillance state no less totalitarian than the one the Chinese government is rigging up.

But while China is doing it through government...we are doing it through corporations and consumer products, in the absence of any real regulation that recognizes the stakes at hand.

It is time to start caring about the mess of digital privacy. In fact, it's time to panic."

Nobel laureate takes stance against allowing research to be intellectual property; The Auburn Plainsman, April 11, 2019

Trice Brown, The Auburn Plainsman; Nobel laureate takes stance against allowing research to be intellectual property

"George Smith, recipient of a 2018 Nobel Prize for Chemistry, spoke to a crowd of students and faculty about the problems that arise from making publicly funded research intellectual property.

Smith said one of the greatest problems facing the scientific research community is the ability of universities to claim intellectual property rights on publicly funded research.

“I think that all research ought not to have intellectual — not to be intellectual property,” Smith said. “It’s the property of everyone.”"

Wednesday, April 10, 2019

A board to oversee Georgia journalists sounds like Orwellian fiction. The proposal is all too real.; The Washington Post, April 8, 2019

Margaret Sullivan, The Washington Post; A board to oversee Georgia journalists sounds like Orwellian fiction. The proposal is all too real.

"Granted, journalists are far from perfect, and their practices deserve to be held to reasonable standards. But there already is pretty good agreement about journalistic ethics, available for all to see.

Respectable news organizations have codes of ethics — many of them available to the public. The Society of Professional Journalists has a well-accepted code as well."

Tuesday, April 9, 2019

Why we can’t leave Grindr under Chinese control; The Washington Post, April 9, 2019

Isaac Stone Fish, The Washington Post; Why we can’t leave Grindr under Chinese control

"Because a Chinese company now oversees Grindr’s data, photographs and messages, that means the [Chinese Communist] Party can, if it chooses to do so, access all of that information, regardless of where it’s stored. And that data includes compromising photos and messages from some of America’s most powerful men — some openly gay, and some closeted.

Couple this with China’s progress in developing big data and facial recognition software, industries more advanced there than in the United States, and there are some concerning national security implications of a Chinese-owned Grindr. In other words, Beijing could now exploit compromising photos of millions of Americans. Think what a creative team of Chinese security forces could do with its access to Grindr’s data."

Pride and profit: Why Mayan weavers fight for intellectual property rights; The Christian Science Monitor, March 27, 2019

The Christian Science Monitor;

Pride and profit: Why Mayan weavers fight for intellectual property rights

Why We Wrote This

Who owns culture, if anyone? It’s a complicated question that can seem almost theoretical. But its real-life consequences are keenly felt by many traditional artisans.

"Dr. Little fears that looking at textile design through the lens of fashion essentially “freezes it in time as a kind of folk art or folk material and that doesn’t allow it to actually live.”

“I think of [weaving] like a language,” he adds. Among indigenous communities, “it’s more vibrant when everyone is using it, fooling around with it, taking from others, and making new combinations. Vibrancy in language indicates strength, and in textiles it’s the same way.”"

Real or artificial? Tech titans declare AI ethics concerns; AP, April 7, 2019

Matt O'Brien and Rachel Lerman, AP; Real or artificial? Tech titans declare AI ethics concerns

"The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side.

But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than what’s in society’s best interests.

“Ethical AI” has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?"

Monday, April 8, 2019

AI systems should be accountable, explainable, and unbiased, says EU; The Verge, April 8, 2019

James Vincent, The Verge; AI systems should be accountable, explainable, and unbiased, says EU

"The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.

These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy, moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology."

Expert Panel: What even IS 'tech ethics'?; TechCrunch, April 2, 2019

Greg Epstein, TechCrunch; Expert Panel: What even IS 'tech ethics'?

"It’s been a pleasure, this past month, to launch a weekly series investigating issues in tech ethics, here at TechCrunch. As discussions around my first few pieces have taken off, I’ve noticed one question recurring in a number of different ways: what even IS “tech ethics”? I believe there’s lots of room for debate about what this growing field entails, and I hope that remains the case because we’re going to need multiple ethical perspectives on technologies that are changing billions of lives. That said, we need to at least attempt to define what we’re talking about, in order to have clearer public conversations about the ethics of technology."

Are big tech’s efforts to show it cares about data ethics another diversion?; The Guardian, April 7, 2019

John Naughton, The Guardian; Are big tech’s efforts to show it cares about data ethics another diversion?

"No less a source than Gartner, the technology analysis company, for example, has also sussed it and indeed has logged “data ethics” as one of its top 10 strategic trends for 2019...

Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash. 

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity."

Circumcision, patient trackers and torture: my job in medical ethics; The Guardian, April 8, 2019

Julian Sheather, The Guardian; Circumcision, patient trackers and torture: my job in medical ethics

"Monday

Modern healthcare is full of ethical problems. Some are intensely practical, such as whether we can withdraw a feeding tube from a patient in a vegetative state who could go on living for many years, or whether a GP should give a police officer access to patient records following a local rape. 

Others are more speculative and future-oriented: will robots become carers, and would that be a bad thing? And then there are the political questions, like whether the Home Office should have access to patient records. My job is to advise the British Medical Association on how we navigate these issues and make sure the theory works in practice for patients and healthcare professionals."

Sunday, April 7, 2019

Hey Google, sorry you lost your ethics council, so we made one for you; MIT Technology Review, April 6, 2019

Bobbie Johnson and Gideon Lichfield, MIT Technology Review; Hey Google, sorry you lost your ethics council, so we made one for you

"Well, that didn’t take long. After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC—a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing.

How did things go so wrong? And can Google put them right? We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next. If these people had been on ATEAC, the story might have had a different outcome."

Thursday, April 4, 2019

Highly Profitable Medical Journal Says Open Access Publishing Has Failed. Right.; Forbes, April 1, 2019

Steven Salzberg, Forbes; Highly Profitable Medical Journal Says Open Access Publishing Has Failed. Right.

"What Haug doesn't mention here is that there is one reason (and only one, I would argue) that NEJM makes all of its articles freely available after some time has passed: the NIH requires it. This dates back to 2009, when Congress passed a law, after intense pressure from citizens who were demanding access to the research results that they'd paid for, requiring all NIH-funded results to be deposited in a free, public repository (now called PubMed Central) within 12 months of publication.

Scientific publishers fought furiously against this policy. I know, because I was there, and I talked to many people involved in the fight at the time. The open-access advocates (mostly patient groups) wanted articles to be made freely available immediately, and they worked out a compromise where the journals could have 6 months of exclusivity. At the last minute, the NIH Director at the time, Elias Zerhouni, extended this to 12 months, for reasons that remain shrouded in secrecy, but thankfully, the public (and science) won the main battle. For NEJM to turn around now and boast that they are releasing articles after an embargo period, without mentioning this requirement, is hypocritical, to say the least. Believe me, if the NIH requirement disappeared (and publishers are still lobbying to get rid of it!), NEJM would happily go back to keeping all access restricted to subscribers.

The battle is far from over. Open access advocates still want to see research released immediately, not after a 6-month or 12-month embargo, and that's precisely what the European Plan S will do."

THE PROBLEM WITH AI ETHICS; The Verge, April 3, 2019

James Vincent, The Verge; 

THE PROBLEM WITH AI ETHICS

Is Big Tech’s embrace of AI ethics boards actually helping anyone?


"Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

“It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right."

Google’s brand-new AI ethics board is already falling apart; Vox, April 3, 2019

Kelsey Piper, Vox; Google’s brand-new AI ethics board is already falling apart

"Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.

James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”

Altogether, it’s not the most promising start for the board.

The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one."

Monday, April 1, 2019

Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret; Forbes, March 27, 2019

Sam Shead, Forbes; Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret

"Google announced a new external advisory council to keep its artificial intelligence developments in check on Wednesday, but the mysterious AI ethics board that was set up when the company bought the DeepMind AI lab in 2014 remains shrouded in mystery.

The new advisory council consists of eight members that span academia and public policy. 

"We've established an Advanced Technology External Advisory Council (ATEAC)," wrote Kent Walker, SVP of global affairs at Google, in a blog post on Tuesday. "This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work."

Here is the full list of AI advisory council members:"

Friday, March 29, 2019

'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley; The Guardian, March 29, 2019

Sam Levin, The Guardian; 'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley

"“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”

Universities and ethics boards could play a vital role in counteracting these trends. But they rarely work with people who are affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.”

It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.

The Partnership on AI, an ethics-focused industry group launched by Google, Facebook, Amazon, IBM and Microsoft, does not appear to have black board members or staff listed on its site, and has a board dominated by men. A separate Microsoft research group dedicated to “fairness, accountability, transparency, and ethics in AI” also excludes black voices."
 

Apple Martin tells off mother Gwyneth Paltrow for sharing photo without consent; The Guardian, March 28, 2019

Kate Lyons, The Guardian;

Apple Martin tells off mother Gwyneth Paltrow for sharing photo without consent

 

"Paltrow posted a photo to Instagram earlier in the week of herself with Apple Martin, her 14-year-old daughter with Coldplay singer Chris Martin, at a ski field. Apple’s face is largely covered by ski goggles.

Apple commented on the post: “Mom we have discussed this. You may not post anything without my consent.”

Paltrow replied: “You can’t even see your face!”

Apple’s comment, which was later deleted, sparked debate about how much parents should share about their children’s lives on social media."

 

A study of ethicists finds they’re no more ethical than the rest of us (and no better at calling their mothers); Quartz, March 27, 2019

Olivia Goldhill, Quartz;

A study of ethicists finds they’re no more ethical than the rest of us (and no better at calling their mothers)

 

"For all their pontificating and complex moral theories, ethicists are just as disappointingly flawed as the rest of humanity. A study of 417 professors published last week in Philosophical Psychology found that, though the 151 ethics professors expressed stricter moral views, they were no better at behaving ethically."

With Vaccine Misinformation, Libraries Walk a Fine Line; Undark, March 22, 2019

Jane Roberts, Undark; With Vaccine Misinformation, Libraries Walk a Fine Line

As vanguards of intellectual freedom, public libraries face difficult questions regarding what vaccine materials to make available. How to decide?

"The decision on what to make available to library patrons — and what not to — would seem perilous territory for America’s foundational repositories of ideas, though debates over library collections are not new. Still, in an era beset by “fake news” and other artifacts of the disinformation age, libraries (and librarians) may once again find themselves facing difficult choices. One of the core values of librarianship, said Andrea Jamison, a lecturer in library science at Valparaiso University in Indiana, is upholding the principles of intellectual freedom — which include challenging censorship. “We do want to make sure we are presenting information that is accurate,” Jamison said. “But then the question becomes, who becomes the determining factor?”"

Wednesday, March 20, 2019

The New Zealand Terror Attack Shows Our Ethics Lagging Way Behind Our Technology; Forbes, March 19, 2019

Todd Essig, Forbes;

The New Zealand Terror Attack Shows Our Ethics Lagging Way Behind Our Technology


"We are failing. Collectively. Some more than others. When white nationalist terrorism struck New Zealand, after similar strikes in Norway, Pittsburgh and Charleston, it showed how we are failing to meet a central challenge posed by our technologically hyper-connected world. Namely, the cultural consequences of rapidly advancing technology require an equally accelerated and psychologically-informed life-long ethical education. The more things change, well, the more things have to change. We all have to do better.

Hate speech takes root and sprouts violence in the fertile ground of, as Christian Picciolini describes in White American Youth: My Descent into America's Most Violent Hate Movement--and How I Got Out, someone searching for identity, community, and purpose. Simply put, the developed world is failing to provide good-enough experiences of “identity, community, and purpose" suitable for 21st-century techno-culture. 

The old ways for learning how to be a good, decent person no longer work, or don’t work well enough for enough people. Of course it's an incredibly complex issue. But one piece is that people are now paradoxically isolated at their screens at the same time they are globally connected everywhere with anyone they choose. This paradox creates a feeling of community but without the responsibilities of community. The complexity and consequence of being fully with another person is diminished. Opportunities for empathy shrink to a vanishing point. But empathy creates the friction we need to slow and maybe even stop hate. So hate grows."

Tuesday, March 19, 2019

Ethics, Computing, and AI: Perspectives from MIT; MIT News, March 18, 2019

MIT News;

Ethics, Computing, and AI: Perspectives from MIT

Faculty representing all five MIT schools offer views on the ethical and societal implications of new technologies.

"The MIT Stephen A. Schwarzman College of Computing will reorient the Institute to bring the power of computing and AI to all fields at MIT; allow the future of computing and AI to be shaped by all MIT disciplines; and advance research and education in ethics and public policy to help ensure that new technologies benefit the greater good.

To support ongoing planning for the new college, Dean Melissa Nobles invited faculty from all five MIT schools to offer perspectives on the societal and ethical dimensions of emerging technologies. This series presents the resulting commentaries — practical, inspiring, concerned, and clear-eyed views from an optimistic community deeply engaged with issues that are among the most consequential of our time. 

The commentaries represent diverse branches of knowledge, but they sound some common themes, including: the vision of an MIT culture in which all of us are equipped and encouraged to discern the impact and ethical implications of our endeavors."

Educators Urge Parents And High Schools To Make Ethics The Heart Of College Applications; WBUR, On Point, March 18, 2019

WBUR, On Point;

Educators Urge Parents And High Schools To Make Ethics The Heart Of College Applications

 

"A new report is calling on parents and high schools to put ethical character at the center of college admissions.

The report, though long planned, comes out as the country is still reeling from revelations that wealthy parents bribed standardized test administrators, college coaches and at least one former college trustee to admit students who might not otherwise have been qualified...

The authors make several recommendations to parents:
  1. Keep the focus on your teen. "It's critical for parents to disentangle their own wishes from their teen's wishes," the authors write.
  2. Follow your ethical GPS. The authors advise parents not to let their own voice intrude in college essays, and to not look the other way when hired tutors are over-involved in applications.
  3. Use the admissions process as an opportunity for ethical education.
  4. Be authentic. The authors recommend not sending conflicting messages to their children about what kind of college they should try to get into.
  5. Help your teen contribute to others in meaningful ways. "Service trips to distant countries or launching a new service project are ... not what matters to admissions deans," the authors say. They recommend parents focus on their children's authentic interests instead.
  6. Advocate for elevating ethical character and reducing achievement-related distress.
  7. Model and encourage gratitude."

Facebook's privacy meltdown after Cambridge Analytica is far from over; The Guardian, March 18, 2019

Siva Vaidhyanathan, The Guardian; Facebook's privacy meltdown after Cambridge Analytica is far from over

"Facebook might not be run by Bond villains. But it’s run by people who have little knowledge of or concern for democracy or the dignity of the company’s 2.3 billion users.

The privacy meltdown story should be about how one wealthy and powerful company gave our data without our permission to hundreds of companies with no transparency, oversight, or even concern about abuse. Fortunately, the story does not end with Cambridge Analytica. The United States government revealed on Wednesday that it had opened a criminal investigation into Facebook over just these practices."

Myspace loses all content uploaded before 2016; The Guardian, March 18, 2019

Alex Hern, The Guardian; Myspace loses all content uploaded before 2016 

Faulty server migration blamed for mass deletion of songs, photos and video

"Myspace, the once mighty social network, has lost every single piece of content uploaded to its site before 2016, including millions of songs, photos and videos with no other home on the internet.
 
The company is blaming a faulty server migration for the mass deletion, which appears to have happened more than a year ago, when the first reports appeared of users unable to access older content. The company has confirmed to online archivists that music has been lost permanently, dashing hopes that a backup could be used to permanently protect the collection for future generations...

Some have questioned how the embattled company, which was purchased by Time Inc in 2016, could make such a blunder."

Saturday, March 16, 2019

'I can get any novel I want in 30 seconds': can book piracy be stopped?; The Guardian, March 6, 2019

Katy Guest, The Guardian;

'I can get any novel I want in 30 seconds': can book piracy be stopped?


"The UK government’s Intellectual Property Office estimates that 17% of ebooks are consumed illegally. Generally, pirates tend to be from better-off socioeconomic groups, and aged between 30 and 60. Many use social media to ask for tips when their regular piracy website is shut down; when I contacted some, those who responded always justified it by claiming they were too poor to buy books – then told me they read them on their e-readers, smartphones or computer screens – or that their areas lacked libraries, or they found it hard to locate books in the countries where they lived. Some felt embarrassed. Others blamed greedy authors for trying to stop them.

When we asked Guardian readers to tell us about their experiences with piracy, we had more than 130 responses from readers aged between 20 and 70. Most regularly downloaded books illegally and while some felt guilty – more than one said they only pirated “big names” and when “the author isn’t on the breadline, think Lee Child” – the majority saw nothing wrong in the practice. “Reading an author’s work is a greater compliment than ignoring it,” said one, while others claimed it was part of a greater ethos of equality, that “culture should be free to all”."