Thursday, April 25, 2019

Made in China, Exported to the World: The Surveillance State; The New York Times, April 24, 2019

Paul Mozur, Jonah M. Kessel and Melissa Chan, The New York Times; Made in China, Exported to the World: The Surveillance State

"Under President Xi Jinping, the Chinese government has vastly expanded domestic surveillance, fueling a new generation of companies that make sophisticated technology at ever lower prices. A global infrastructure initiative is spreading that technology even further.

Ecuador shows how technology built for China’s political system is now being applied — and sometimes abused — by other governments. Today, 18 countries — including Zimbabwe, Uzbekistan, Pakistan, Kenya, the United Arab Emirates and Germany — are using Chinese-made intelligent monitoring systems, and 36 have received training in topics like “public opinion guidance,” which is typically a euphemism for censorship, according to an October report from Freedom House, a pro-democracy research group.

With China’s surveillance know-how and equipment now flowing to the world, critics warn that it could help underpin a future of tech-driven authoritarianism, potentially leading to a loss of privacy on an industrial scale. Often described as public security systems, the technologies have darker potential uses as tools of political repression."

The Legal and Ethical Implications of Using AI in Hiring; Harvard Business Review, April 25, 2019

Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband and Lucinda Schettler, Harvard Business Review; The Legal and Ethical Implications of Using AI in Hiring


    "Using AI, big data, social media, and machine learning, employers will have ever-greater access to candidates’ private lives, private attributes, and private challenges and states of mind. There are no easy answers to many of the new questions about privacy we have raised here, but we believe that they are all worthy of public discussion and debate."

    Enron, Ethics And The Slow Death Of American Democracy; Forbes, April 24, 2019

    Ken Silverstein, Forbes; Enron, Ethics And The Slow Death Of American Democracy

    "“Ask not whether it is lawful but whether it is ethical and moral,” says Todd Haugh, professor of business ethics at Indiana University's Kelley School of Business, in a conversation with this reporter. “What is the right thing to do and how do we foster this? We are trying to create values and trust in the market and there are rules and obligations that extend beyond what is merely legal. In the end, organizational interests are about long-term collective success and not about short-term personal gain.”...

    The Moral of the Mueller Report
    The corollary to this familiar downfall is that of the U.S. presidency in the context of the newly released redacted version of the Mueller report. The same moral questions, in fact, have surfaced today that did so when Enron reigned: While Enron had a scripted code of conduct, it couldn’t transcend its own arrogance — that all was fair in the name of profits. Similarly, Trump has deluded himself and portions of the public that all is fair in the name of winning.
    “One of the most disturbing things is the idea you can do whatever you need to do so long as you don’t get punished by the legal system,” says Professor Haugh. “We have seen echoes of that ever since the 2016 election. It is how this president is said to have acted in his business and many of us consider this conduct morally wrong. It is difficult to have an ethical culture when the leader does not follow what most people consider to be moral actions.”...
    Just as Enron caused the nation to evaluate the balance between people and profits, the U.S. president has forced American citizens to re-examine the boundaries between legality and morality. Good leadership isn’t about enriching the self but about bettering society and setting the tone for how organizations act. Debasing those standards is always a loser. And what’s past is prologue — a roadmap that the president is probably ignoring at democracy’s peril."

    Tuesday, April 23, 2019

    What the EU’s copyright overhaul means — and what might change for big tech; NiemanLab, Nieman Foundation at Harvard, April 22, 2019

    Marcello Rossi, NiemanLab, Nieman Foundation at Harvard; What the EU’s copyright overhaul means — and what might change for big tech

    "The activity indeed now moves to the member states. Each of the 28 countries in the EU now has two years to transpose it into its own national laws. Until we see how those laws shake out, especially in countries with struggles over press and internet freedom, both sides of the debate will likely have plenty of room to continue arguing their sides — that it marks a groundbreaking step toward a more balanced, fair internet, or that it will result in a set of legal ambiguities that threaten the freedom of the web."

    Once upon a time in Silicon Valley: How Facebook's open-data nirvana fell apart; NBC News, April 19, 2019

    David Ingram and Jason Abbruzzese, NBC News; Once upon a time in Silicon Valley: How Facebook's open-data nirvana fell apart

    "Facebook’s missteps have raised awareness about the possible abuse of technology, and created momentum for digital privacy laws in Congress and in state legislatures.

    “The surreptitious sharing with third parties because of some ‘gotcha’ in the terms of service is always going to upset people because it seems unfair,” said Michelle Richardson, director of the data and privacy project at the Center for Democracy & Technology.

    After the past two years, she said, “you can just see the lightbulb going off over the public’s head.”"

    OPINION: The Ethics in Journalism Act is designed to censor journalists; The Sentinel, Kennesaw State University, April 22, 2019

    Sean Eikhoff, The Sentinel, Kennesaw State University; OPINION: The Ethics in Journalism Act is designed to censor journalists

    "The Ethics in Journalism Act currently in the Georgia House of Representatives is a thinly veiled attempt to censor journalists. A government-created committee with the power to unilaterally suspend or probate journalists is a dangerous concept and was exactly the sort of institution the framers sought to avoid when establishing freedom of the press.

    The bill, HB 734, is sponsored by six Republicans and would create a Journalism Ethics Board with nine members appointed by Steve Wrigley, the chancellor of the University System of Georgia. This board would be tasked with creating a process by which journalists “may be investigated and sanctioned for violating such canons of ethics for journalists to include, but not be limited to, loss or suspension of accreditation, probation, public reprimand and private reprimand.”

    The bill is an attempt to violate journalists’ first amendment rights and leaves open the possibility of the government punishing journalists for reporting the truth."

    Monday, April 22, 2019

    Wary of Chinese Espionage, Houston Cancer Center Chose to Fire 3 Scientists; The New York Times, April 22, 2019

    Mihir Zaveri, The New York Times; Wary of Chinese Espionage, Houston Cancer Center Chose to Fire 3 Scientists

    "“A small but significant number of individuals are working with government sponsorship to exfiltrate intellectual property that has been created with the support of U.S. taxpayers, private donors and industry collaborators,” Dr. Peter Pisters, the center’s president, said in a statement on Sunday.

    “At risk is America’s internationally acclaimed system of funding biomedical research, which is based on the principles of trust, integrity and merit.”

    The N.I.H. had also flagged two other researchers at MD Anderson. One investigation is proceeding, the center said, and the evidence did not warrant firing the other researcher.

    The news of the firings was first reported by The Houston Chronicle and Science magazine.

    The investigations began after Francis S. Collins, the director of the National Institutes of Health, sent a letter in August to more than 10,000 institutions the agency funds, warning of “threats to the integrity of U.S. biomedical research.”"

    Iancu v. Brunetti Oral Argument; C-SPAN, April 15, 2019

    C-SPAN; Iancu v. Brunetti Oral Argument

    "The Supreme Court heard oral argument for Iancu v. Brunetti, a case concerning trademark law and the ban of “scandalous” and “immoral” trademarks. Erik Brunetti founded a streetwear brand called “FUCT” back in 1990. Since then, he’s attempted to trademark it but with no success. Under the Lanham Act, the U.S. Patent and Trademark Office (USPTO) can refuse an application if it considers it to be “immoral” or “scandalous,” and that’s exactly what happened here. The USPTO Trademark Trial and Appeal Board also reviewed the application and agreed that the mark was “scandalous” and very similar to the word “fucked.” The board also noted that “FUCT” was used on products with sexual imagery and that the public’s interpretation of it carried “an unmistakable aura of negative sexual connotations.” Mr. Brunetti’s legal team argued that this is in direct violation of his First Amendment rights to free speech and private expression. Furthermore, they said speech should be protected under the First Amendment even if one is in disagreement with it. The case eventually came before the U.S. Court of Appeals for the Federal Circuit, which ruled in favor of Mr. Brunetti. The federal government then filed an appeal with the Supreme Court. The justices will now decide whether the Lanham Act’s ban on “immoral” or “scandalous” trademarks is unconstitutional."

    Tech giants are seeking help on AI ethics. Where they seek it matters; Quartz, March 30, 2019

    Dave Gershgorn, Quartz; Tech giants are seeking help on AI ethics. Where they seek it matters

    "Meanwhile, as Quartz reported last week, Stanford’s new Institute for Human-Centered Artificial Intelligence excluded from its faculty any significant number of people of color, some of whom have played key roles in creating the field of AI ethics and algorithmic accountability.

    Other tech companies are also seeking input on AI ethics, including Amazon, which this week announced a $10 million grant in partnership with the National Science Foundation. The funding will support research into fairness in AI."

    A New Model For AI Ethics In R&D; Forbes, March 27, 2019

    Cansu Canca, Forbes; A New Model For AI Ethics In R&D


    "The ethical framework that evolved for biomedical research—namely, the ethics oversight and compliance model—was developed in reaction to the horrors arising from biomedical research during World War II and which continued all the way into the ’70s.

    In response, bioethics principles and ethics review boards guided by these principles were established to prevent unethical research. In the process, these boards were given a heavy hand to regulate research without checks and balances to control them. Despite deep theoretical weaknesses in its framework and massive practical problems in its implementation, this became the default ethics governance model, perhaps due to the lack of competition.

    The framework now emerging for AI ethics resembles this model closely. In fact, the latest set of AI principles—drafted by AI4People and forming the basis for the Draft Ethics Guidelines of the European Commission’s High-Level Expert Group on AI—evaluates 47 proposed principles and condenses them into just five.

    Four of these are exactly the same as traditional bioethics principles: respect for autonomy, beneficence, non-maleficence, and justice, as defined in the Belmont Report of 1979. There is just one new principle added—explicability. But even that is not really a principle itself, but rather a means of realizing the other principles. In other words, the emerging default model for AI ethics is a direct transplant of bioethics principles and ethics boards to AI ethics. Unfortunately, it leaves much to be desired for effective and meaningful integration of ethics into the field of AI."

    Thursday, April 18, 2019

    'Disastrous' lack of diversity in AI industry perpetuates bias, study finds; The Guardian, April 16, 2019

    Kari Paul, The Guardian; 'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

    "Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases found the survey, published by the AI Now Institute, of more than 150 studies and reports.

    The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said...

    The report released on Tuesday cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem, or the makeup of who is hired, alone. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually. The AI institute suggested additional measures, including publishing compensation levels for workers publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of underrepresented groups at all levels."
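
    The disparities catalogued in the report can be checked for directly. As a minimal sketch of that kind of audit (the group labels and records below are invented for illustration, not drawn from the report), one common check is to compare a system's error rates across demographic groups:

        # Minimal sketch: compare error rates across groups to surface the kind
        # of disparate performance the AI Now report describes. All data invented.
        from collections import defaultdict

        # (group, true_label, predicted_label) records from a hypothetical audit
        records = [
            ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
            ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
        ]

        tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
        for group, truth, predicted in records:
            tallies[group][0] += int(truth != predicted)
            tallies[group][1] += 1

        for group, (wrong, total) in tallies.items():
            print(f"{group}: error rate {wrong / total:.0%}")
        # group_a: 25%, group_b: 50% -- a persistent per-group gap like this,
        # not aggregate accuracy, is what exposes the failures the report describes.

    The point of the sketch is only that per-group evaluation, rather than a single overall accuracy number, is what surfaces failures like recognition systems performing worse for users with darker skin.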

    Privacy Is Too Big to Understand; The New York Times, April 16, 2019

    Charlie Warzel, The New York Times; Privacy Is Too Big to Understand

    At its heart, privacy is about how data is used to take away our control. 

    "Privacy Is Too Big to Understand

    “Privacy” is an impoverished word — far too small a word to describe what we talk about when we talk about the mining, transmission, storing, buying, selling, use and misuse of our personal information.

    Ethics Alone Can’t Fix Big Tech; Slate, April 17, 2019

    Daniel Susser, Slate; Ethics Alone Can’t Fix Big Tech

    Ethics can provide blueprints for good tech, but it can’t implement them.



    "Ethics requires more than rote compliance. And it’s important to remember that industry can reduce any strategy to theater. Simply focusing on law and policy won’t solve these problems, since they are equally (if not more) susceptible to watering down. Many are rightly excited about new proposals for state and federal privacy legislation, and for laws constraining facial recognition technology, but we’re already seeing industry lobbying to strip them of their most meaningful provisions. More importantly, law and policy evolve too slowly to keep up with the latest challenges technology throws at us, as is evident from the fact that most existing federal privacy legislation is older than the internet.

    The way forward is to see these strategies as complementary, each offering distinctive and necessary tools for steering new and emerging technologies toward shared ends. The task is fitting them together.

    By its very nature ethics is idealistic. The purpose of ethical reflection is to understand how we ought to live—which principles should drive us and which rules should constrain us. However, it is more or less indifferent to the vagaries of market forces and political winds. To oversimplify: Ethics can provide blueprints for good tech, but it can’t implement them. In contrast, law and policy are creatures of the here and now. They aim to shape the future, but they are subject to the brute realities—social, political, economic, historical—from which they emerge. What they lack in idealism, though, is made up for in effectiveness. Unlike ethics, law and policy are backed by the coercive force of the state."

    Tuesday, April 16, 2019

    Course organized by students tackles ethics in CS; The Brown Daily Herald, April 15, 2019

    Janet Chang, The Brown Daily Herald; Course organized by students tackles ethics in CS


    "Last spring, students in a new computer science social change course developed software tools for a disaster relief organization to teach refugee children about science and technology, a Chrome extension to filter hate speech on the internet and a mobile app to help doctors during a patient visits.

    Called CSCI 1951I: “CS for Social Change,” the course — now in its second iteration — was developed for computer science, design and engineering students to discuss and reflect on the social impact of their work while building practical software tools to help local and national partner nonprofits over the 15-week semester.

    The course was initially conceived by Nikita Ramoji ’20, among others, who was a co-founder of CS for Social Change, a student organization that aims to add ethics education to college computer science departments. “The (general consensus) was that we were getting a really great computer science education, but we didn’t really have that social component,” she said."

    Monday, April 15, 2019

    EU approves tougher EU copyright rules in blow to Google, Facebook; Reuters, April 15, 2019

    Foo Yun Chee, Reuters; EU approves tougher EU copyright rules in blow to Google, Facebook

    "Under the new rules, Google and other online platforms will have to sign licensing agreements with musicians, performers, authors, news publishers and journalists to use their work.

    The European Parliament gave a green light last month to a proposal that has pitted Europe’s creative industry against tech companies, internet activists and consumer groups."

    Sunday, April 14, 2019

    Europe's Quest For Ethics In Artificial Intelligence; Forbes, April 11, 2019

    Andrea Renda, Forbes; Europe's Quest For Ethics In Artificial Intelligence

    "This week a group of 52 experts appointed by the European Commission published extensive Ethics Guidelines for Artificial Intelligence (AI), which seek to promote the development of “Trustworthy AI” (full disclosure: I am one of the 52 experts). This is an extremely ambitious document. For the first time, ethical principles will not simply be listed, but will be put to the test in a large-scale piloting exercise. The pilot is fully supported by the EC, which endorsed the Guidelines and called on the private sector to start using it, with the hope of making it a global standard.

    Europe is not alone in the quest for ethics in AI. Over the past few years, countries like Canada and Japan have published AI strategies that contain ethical principles, and the OECD is adopting a recommendation in this domain. Private initiatives such as the Partnership on AI, which groups more than 80 corporations and civil society organizations, have developed ethical principles. AI developers agreed on the Asilomar Principles and the Institute of Electrical and Electronics Engineers (IEEE) worked hard on an ethics framework. Most high-tech giants already have their own principles, and civil society has worked on documents, including the Toronto Declaration focused on human rights. A study led by Oxford Professor Luciano Floridi found significant alignment between many of the existing declarations, despite varying terminologies. They also share a distinctive feature: they are not binding, and not meant to be enforced."

    Studying Ethics Across Disciplines; Lehigh News, April 10, 2019

    Madison Hoff, Lehigh News; Studying Ethics Across Disciplines

    Undergraduates explore ethical issues in health, education, finance, computers and the environment at Lehigh’s third annual ethics symposium.

    "The event was hosted for the first time by Lehigh’s new Center for Ethics and made possible by The Endowment Fund for the Teaching of Ethical Decision-Making. The philosophy honor society Phi Sigma Tau also helped organize the symposium, which allowed students to share their research work on ethical problems in or outside their field of study.

    “Without opportunities for Lehigh undergrads to study ethical issues and to engage in informed thinking and discussion of them, they won’t be well-prepared to take on these challenges and respond to them well,” said Professor Robin Dillon, director of the Lehigh Center of Ethics. “The symposium is one of the opportunities the [Center of Ethics] provides.” 

    Awards were given to the best presentation from each of the three colleges and a grand prize. This year, the judges were so impressed with the quality of the presentations that they decided to award two grand prizes for the best presentation of the symposium category.

    Harry W. Ossolinski ’20 and Patricia Sittikul ’19 both won the grand prize. 

    As a computer science student, Sittikul researched the ethics behind automated home devices and social media, such as Tumblr and Reddit. Sittikul looked at privacy and censorship issues and whether the outlets are beneficial.

    Sittikul said the developers of the devices and apps should be held accountable for the ethical issues that arise. She said she has seen some companies look for solutions to ethical problems. 

    “I think it's incredibly important to look at ethical questions as a computer scientist because when you are working on technology, you are impacting so many people whether you know it or not,” Sittikul said."

    He wants to trademark a brand name that sounds like the F-word. The Supreme Court is listening.; The Washington Post, April 13, 2019

    Robert Barnes, The Washington Post; He wants to trademark a brand name that sounds like the F-word. The Supreme Court is listening.

    "Brunetti is challenging a neighboring provision in the law, which prohibits the registration of “immoral” or “scandalous” trademarks. And his odds look good."

    Friday, April 12, 2019

    The open access research model is hurting academics in poorer countries; Quartz, April 12, 2019

    Brenda Wingfield and Bob Millar, University of Pretoria, Quartz; The open access research model is hurting academics in poorer countries

    "There is however, little focus on the costs of open access to researchers in the developing world. Most people we have spoken to inside academia are under the impression that these costs are waived. But that’s only the case for some journals in 47 of the world’s “least developed” nations; researchers in the 58 other countries in the developing world must pay the full price...

    The cost of a PlosOne article is 20% of the cost of a Masters student’s scholarship. So the choice is “do I give a Masters student a scholarship, or publish more in open access journals?” We are trying to do both and we are sure that’s the approach many research programs are trying to take. But as more journals take the open access route this is going to be more difficult. In future, if we want to publish more articles in open access journals, we will have to reduce the number of Masters, Doctoral and post doctoral students in our programs."
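
    To make that arithmetic concrete, here is a minimal sketch in Python. The 20% ratio is the authors'; the absolute scholarship cost, the program budget, and the currency-free units are hypothetical assumptions added for illustration:

        # Hypothetical illustration of the publish-or-fund trade-off described above.
        SCHOLARSHIP = 100_000        # assumed cost of one Masters scholarship (illustrative)
        APC = 0.20 * SCHOLARSHIP     # the authors' ratio: one article fee = 20% of a scholarship

        def scholarships_left(budget: float, oa_articles: int) -> float:
            """Scholarships still fundable after paying open-access article fees."""
            return (budget - oa_articles * APC) / SCHOLARSHIP

        budget = 10 * SCHOLARSHIP    # a program that could otherwise fund 10 students
        for n in (0, 5, 10, 25, 50):
            print(f"{n:>2} open-access articles -> {scholarships_left(budget, n):4.1f} scholarships")
        # 0 -> 10.0, 5 -> 9.0, 10 -> 8.0, 25 -> 5.0, 50 -> 0.0

    At the stated ratio, every five open-access articles cost the program one scholarship, which is the squeeze the authors expect to tighten as more journals move to this model.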

    Thursday, April 11, 2019

    How The Times Thinks About Privacy; The New York Times, April 10, 2019

    A.G. Sulzberger, The New York Times; How The Times Thinks About Privacy

    We’re examining our policies and practices around data, too. 

    "The Times is committed to continue taking steps to increase transparency and protections. And our journalists will do their part to ensure that the public and policymakers are fully informed by covering these issues aggressively, fairly and accurately. Over the coming months, The Privacy Project will feature reporters investigating how digital privacy is being compromised, Op-Ed editors bringing in outside voices to help foster debate and contextualize trade-offs, and opinion writers calling for solutions. All of us at The Times will be reading closely as well, using their findings to help inform the continuing evolution of our own policies and practices."

    Do You Know What You’ve Given Up?; The New York Times, April 10, 2019

    James Bennet, The New York Times; Do You Know What You’ve Given Up?

    ""It seems like a good moment to pause and consider the choices we’ve already made, and the ones that lie ahead. That’s why Times Opinion is launching The Privacy Project, a monthslong initiative to explore the technology, to envision where it’s taking us, and to convene debate about how we should control it to best realize, rather than stunt or distort, human potential."

    It's Time to Panic About Privacy; The New York Times, April 10, 2019

    Farhad Manjoo, The New York Times; It's Time to Panic About Privacy

    "Here is the stark truth: We in the West are building a surveillance state no less totalitarian than the one the Chinese government is rigging up.

    But while China is doing it through government...we are doing it through corporations and consumer products, in the absence of any real regulation that recognizes the stakes at hand.

    It is time to start caring about the mess of digital privacy. In fact, it's time to panic."

    Nobel laureate takes stance against allowing research to be intellectual property; The Auburn Plainsman, April 11, 2019

    Trice Brown, The Auburn Plainsman; Nobel laureate takes stance against allowing research to be intellectual property

    "George Smith, recipient of a 2018 Nobel Prize for Chemistry, spoke to a crowd of students and faculty about the problems that arise from making publicly funded research intellectual property.

    Smith said one of the greatest problems facing the scientific research community is the ability of universities to claim intellectual property rights on publicly funded research.

    “I think that all research ought not to have intellectual — not to be intellectual property,” Smith said. “It’s the property of everyone.”"

    Wednesday, April 10, 2019

    A board to oversee Georgia journalists sounds like Orwellian fiction. The proposal is all too real.; The Washington Post, April 8, 2019

    Margaret Sullivan, The Washington Post; A board to oversee Georgia journalists sounds like Orwellian fiction. The proposal is all too real.

    "Granted, journalists are far from perfect, and their practices deserve to be held to reasonable standards. But there already is pretty good agreement about journalistic ethics, available for all to see.

    Respectable news organizations have codes of ethics — many of them available to the public. The Society of Professional Journalists has a well-accepted code as well."

    Tuesday, April 9, 2019

    Why we can’t leave Grindr under Chinese control; The Washington Post, April 9, 2019

    Isaac Stone Fish, The Washington Post; Why we can’t leave Grindr under Chinese control

    "Because a Chinese company now oversees Grindr’s data, photographs and messages, that means the [Chinese Communist] Party can, if it chooses to do so, access all of that information, regardless of where it’s stored. And that data includes compromising photos and messages from some of America’s most powerful men — some openly gay, and some closeted.

    Couple this with China’s progress in developing big data and facial recognition software, industries more advanced there than in the United States, and there are some concerning national security implications of a Chinese-owned Grindr. In other words, Beijing could now exploit compromising photos of millions of Americans. Think what a creative team of Chinese security forces could do with its access to Grindr’s data."

    Pride and profit: Why Mayan weavers fight for intellectual property rights; The Christian Science Monitor, March 27, 2019

    The Christian Science Monitor; Pride and profit: Why Mayan weavers fight for intellectual property rights

    Why We Wrote This

    Who owns culture, if anyone? It’s a complicated question that can seem almost theoretical. But its real-life consequences are keenly felt by many traditional artisans.

    "Dr. Little fears that looking at textile design through the lens of fashion essentially “freezes it in time as a kind of folk art or folk material and that doesn’t allow it to actually live.”

    “I think of [weaving] like a language,” he adds. Among indigenous communities, “it’s more vibrant when everyone is using it, fooling around with it, taking from others, and making new combinations. Vibrancy in language indicates strength, and in textiles it’s the same way.”"

    Real or artificial? Tech titans declare AI ethics concerns; AP, April 7, 2019

    Matt O'Brien and Rachel Lerman, AP; Real or artificial? Tech titans declare AI ethics concerns

    "The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side.

    But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than what’s in society’s best interests.

    “Ethical AI” has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

    But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?"

    Monday, April 8, 2019

    AI systems should be accountable, explainable, and unbiased, says EU; The Verge, April 8, 2019

    James Vincent, The Verge; AI systems should be accountable, explainable, and unbiased, says EU

    "The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.

    These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy, moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology."

    Expert Panel: What even IS 'tech ethics'?; TechCrunch, April 2, 2019

    Greg Epstein, TechCrunch; Expert Panel: What even IS 'tech ethics'?

    "It’s been a pleasure, this past month, to launch a weekly series investigating issues in tech ethics, here at TechCrunch. As discussions around my first few pieces have taken off, I’ve noticed one question recurring in a number of different ways: what even IS “tech ethics”? I believe there’s lots of room for debate about what this growing field entails, and I hope that remains the case because we’re going to need multiple ethical perspectives on technologies that are changing billions of lives. That said, we need to at least attempt to define what we’re talking about, in order to have clearer public conversations about the ethics of technology."

    Are big tech’s efforts to show it cares about data ethics another diversion?; The Guardian, April 7, 2019

    John Naughton, The Guardian; Are big tech’s efforts to show it cares about data ethics another diversion?

    "No less a source than Gartner, the technology analysis company, for example, has also sussed it and indeed has logged “data ethics” as one of its top 10 strategic trends for 2019...

    Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

    The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash. 

    Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity."

    Circumcision, patient trackers and torture: my job in medical ethics; The Guardian, April 8, 2019

    Julian Sheather, The Guardian; Circumcision, patient trackers and torture: my job in medical ethics

    "Monday

    Modern healthcare is full of ethical problems. Some are intensely practical, such as whether we can withdraw a feeding tube from a patient in a vegetative state who could go on living for many years, or whether a GP should give a police officer access to patient records following a local rape. 

    Others are more speculative and future-oriented: will robots become carers, and would that be a bad thing? And then there are the political questions, like whether the Home Office should have access to patient records. My job is to advise the British Medical Association on how we navigate these issues and make sure the theory works in practice for patients and healthcare professionals."

    Sunday, April 7, 2019

    Hey Google, sorry you lost your ethics council, so we made one for you; MIT Technology Review, April 6, 2019

    Bobbie Johnson and Gideon Lichfield, MIT Technology Review; Hey Google, sorry you lost your ethics council, so we made one for you

    "Well, that didn’t take long. After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC—a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing.

    How did things go so wrong? And can Google put them right? We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next. If these people had been on ATEAC, the story might have had a different outcome."

    Thursday, April 4, 2019

    Highly Profitable Medical Journal Says Open Access Publishing Has Failed. Right.; Forbes, April 1, 2019

    Steven Salzberg, Forbes; Highly Profitable Medical Journal Says Open Access Publishing Has Failed. Right.

    "What Haug doesn't mention here is that there is one reason (and only one, I would argue) that NEJM makes all of its articles freely available after some time has passed: the NIH requires it. This dates back to 2009, when Congress passed a law, after intense pressure from citizens who were demanding access to the research results that they'd paid for, requiring all NIH-funded results to be deposited in a free, public repository (now called PubMed Central) within 12 months of publication.

    Scientific publishers fought furiously against this policy. I know, because I was there, and I talked to many people involved in the fight at the time. The open-access advocates (mostly patient groups) wanted articles to be made freely available immediately, and they worked out a compromise where the journals could have 6 months of exclusivity. At the last minute, the NIH Director at the time, Elias Zerhouni, extended this to 12 months, for reasons that remain shrouded in secrecy, but thankfully, the public (and science) won the main battle. For NEJM to turn around now and boast that they are releasing articles after an embargo period, without mentioning this requirement, is hypocritical, to say the least. Believe me, if the NIH requirement disappeared (and publishers are still lobbying to get rid of it!), NEJM would happily go back to keeping all access restricted to subscribers.

    The battle is far from over. Open access advocates still want to see research released immediately, not after a 6-month or 12-month embargo, and that's precisely what the European Plan S will do."

    THE PROBLEM WITH AI ETHICS; The Verge, April 3, 2019

    James Vincent, The Verge; THE PROBLEM WITH AI ETHICS

    Is Big Tech’s embrace of AI ethics boards actually helping anyone?


    "Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

    “It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

    The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right."

    Google’s brand-new AI ethics board is already falling apart; Vox, April 3, 2019

    Kelsey Piper, Vox; Google’s brand-new AI ethics board is already falling apart

    "Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.

    James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”

    Altogether, it’s not the most promising start for the board.

    The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

    A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one."

    Monday, April 1, 2019

    Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret; Forbes, March 27, 2019

    Sam Shead, Forbes; Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret

    "Google announced a new external advisory council to keep its artificial intelligence developments in check on Wednesday, but the mysterious AI ethics board that was set up when the company bought the DeepMind AI lab in 2014 remains shrouded in mystery.

    The new advisory council consists of eight members who span academia and public policy.

    "We've established an Advanced Technology External Advisory Council (ATEAC)," wrote Kent Walker SVP of global affairs at Google in a blog post on Tuesday. "This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work." 

    Here is the full list of AI advisory council members:"