Monday, December 31, 2018

Question Technology; Kip Currier, Ethics in a Tangled Web, December 31, 2018


Kip Currier; Question Technology

Ars Technica’s Timothy B. Lee’s 12/30/18 piece “The hype around driverless cars came crashing down in 2018” is a highly recommended overview of the annus horribilis that the closing year proved to be for the self-driving vehicle industry. Lee references the Gartner consulting firm’s "hype cycle" for new innovations and technologies:

In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality. 

We’ve seen the hype cycle replayed over and over again throughout the World Wide Web age (and throughout recorded history), albeit with new players and new innovations. Sometimes the hype delivers. Sometimes it comes with an unexpected price tag and consequences. Social media was hyped by many through branding and slogans. It offers benefits; chief among them, expanded opportunities for communication and connection. But it also has significant weaknesses that can be, and have been, exploited by actors foreign and domestic.

Since 2016, for example, we’ve acutely learned—and are still learning—how social media platforms such as Facebook can be used to weaponize information, misinform the citizenry, and subvert democracy. From Facebook’s “inflated expectations” Version 1.0, through multiple iterations of hype and rehype, to its 2018 “trough of disillusionment” (which may or may not represent its nadir), much of the public’s perception of Facebook appears finally to be aligning with a more realistic picture of the company’s technology, as well as of its less than transparent and accountable leadership. Indeed, consider how many times this year, and in the preceding decade and a half, the world’s social media-using citizens have heard Facebook CEO Mark Zuckerberg essentially say some version of “Trust me. Trust Facebook. We’re going to fix this.” (See CNBC’s well-documented 12/19/18 piece “Mark Zuckerberg has been talking and apologizing about privacy since 2003 — here’s a reminder of what he’s said.”) Only for the public, like Charlie Brown, to have the proverbial football once again yanked away by seemingly never-ending revelations of deliberate omissions by Facebook leadership concerning the collection and use of users’ data.

To better grasp the impacts of the hype cycle, and the lessons that recognizing it can teach us, it’s useful to remind ourselves of some other recent examples of highly hyped technologies:

In the past decade, many talked about "the death of the print book"—supplanted by the ebook—and the extinction of independent (i.e. non-Amazon) booksellers. Now, print books are thriving again and independent bookstores are making a gradual comeback in some communities. See the 11/3/18 Observer article "Are E-Books Finally Over? The Publishing Industry Unexpectedly Tilts Back to Print" and Vox’s 12/18/18 piece, “Instagram is helping save the indie bookstore”.

More recently, Massive Open Online Courses (MOOCs) were touted as the game-changer that would have higher education quaking in its ivory tower-climbing boots. See Thomas L. Friedman's 2013 New York Times Opinion piece "Revolution Hits the Universities". Five years later, in 2018, a MOOCs-driven revolution seems less inevitable, and perhaps even less desirable, than postulated when MOOCs became all the rage in some quarters. Indeed, just a few months before Friedman’s article, his New York Times employer had declared 2012 “The Year of the MOOC”. In pertinent part from that article:


“I like to call this the year of disruption,” says Anant Agarwal, president of edX, “and the year is not over yet.”

MOOCs have been around for a few years as collaborative techie learning events, but this is the year everyone wants in. [Note to the author: you might just want to qualify and/or substantiate that hyperbolic assertion a bit about “everyone”!] Elite universities are partnering with Coursera at a furious pace. It now offers courses from 33 of the biggest names in postsecondary education, including Princeton, Brown, Columbia and Duke. In September, Google unleashed a MOOC-building online tool, and Stanford unveiled Class2Go with two courses.

Nick McKeown is teaching one of them, on computer networking, with Philip Levis (the one with a shock of magenta hair in the introductory video). Dr. McKeown sums up the energy of this grand experiment when he gushes, “We’re both very excited.” 

But read on, to the very next two sentences in the piece:

Casually draped over auditorium seats, the professors also acknowledge that they are not exactly sure how this MOOC stuff works.

“We are just going to see how this goes over the next few weeks,” says Dr. McKeown.

Yes, you read that right: 

“…they are not exactly sure how this MOOC stuff works.” And: “‘We are just going to see how this goes over the next few weeks,’ says Dr. McKeown.”

Now, in 2018, who is even talking about MOOCs? Certainly, MOOCs are neither totally dead nor completely out of the education picture. But the fever-pitch exhortations around the First Coming of the MOOC have ebbed, as hype machines—and change consultants—have inevitably moved on to “the next bright shiny object”.

Technology has many good points, as well as bad points and, shall we say, aspects that cause legitimate concern. It’s here to stay. I get that. Appreciating the many positive aspects of technology in our lives does not mean that we can’t and shouldn’t still ask questions about its adoption and use. As a mentor of mine often points out, society frequently pushes people to make binary choices, to select either X or Y, when we may, rather, select X and Y. The phrase Question Authority was popularized in the boundary-changing 1960s. Its pedigree is murky and may actually trace back to ancient Greek society. That’s a topic for another piece by someone else. But the phrase, modified to Question Technology, can serve as an inspirational springboard for today.

Happily, 2018 also saw more and more calls for AI ethics, data ethics, ethics courses in computer science and other educational programs, and other permutations of ethics in technology. (And that’s not even getting at all the calls for ethics in government!) Arguably, 2018 was the year that ethics was writ large.

In sum, we need to remind ourselves to be wary of anyone or any entity touting that they know with absolute certainty what a new technology will or will not do today, a year from now, or ten-plus years into the fast-moving future, particularly absent hard evidence to support such claims. Just because someone says it’s so doesn’t make it so. Or that it should be so.

In this era of digitally dispersed disinformation, misinformation, and “alternative facts”, we all need to remind ourselves to think critically, question pronouncements and projections, and verify the truthfulness of assertions with evidence-based analysis and bona fide facts.


The hype around driverless cars came crashing down in 2018; Ars Technica, December 30, 2018

Timothy B. Lee, Ars Technica; The hype around driverless cars came crashing down in 2018

"In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality."

Sunday, December 30, 2018

Colleges Grapple With Teaching the Technology and Ethics of A.I.; The New York Times, November 2, 2018

Alina Tugend, The New York Times; Colleges Grapple With Teaching the Technology and Ethics of A.I.


"At the University of Washington, a new class called “Intelligent Machinery, Identity and Ethics,” is being taught this fall by a team leader at Google and the co-director of the university’s Computational Neuroscience program.

Daniel Grossman, a professor and deputy director of undergraduate studies at the university’s Paul G. Allen School of Computer Science and Engineering, explained the purpose this way:

The course “aims to get at the big ethical questions we’ll be facing, not just in the next year or two but in the next decade or two.”

David Danks, a professor of philosophy and psychology at Carnegie Mellon, just started teaching a class, “A.I., Society and Humanity.” The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

“The key is to make sure they have the opportunities to really explore the ways technology can have an impact — to think how this will affect people in poorer communities or how it can be abused,” he said."

Defending ‘Needles in the Sewer’ and Photographing the Disadvantaged; PetaPixel, December 29, 2018

Simon King, PetaPixel; Defending ‘Needles in the Sewer’ and Photographing the Disadvantaged

[Kip Currier: Thought-provoking article identifying and discussing some of the sticky ethical issues of whether to photograph or not to photograph, particularly regarding vulnerable populations and difficult topics. Kudos to the photographer Simon King for shedding light on his metacognition (i.e. thinking about thinking) with regard to how and when he takes pictures and what he does and does not do with them.

Beyond photography, the issues raised in the piece have broader implications as well for digital age technologies' impacts on disadvantaged communities related to the increasing collection and use of data generated by AI algorithms, mass surveillance, facial recognition, biometric information, etc. The last two paragraphs of a November 2018 New York Times article, Colleges Grapple With Teaching the Technology and Ethics of A.I., provide an example of some of the ways higher education is preparing students to better recognize and address these issues:

David Danks, a professor of philosophy and psychology at Carnegie Mellon, just started teaching a class, “A.I., Society and Humanity.” The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

“The key is to make sure they have the opportunities to really explore the ways technology can have an impact — to think how this will affect people in poorer communities or how it can be abused,” he said.]



"The main issues people brought up about this image were consent and exploitation...

My responsibility (and maybe yours?) as a photographer is to avoid self-censorship. I can always choose to publish an image or not, but only if that image exists in the first place. If I take an image then I should have the presence of mind to understand what I saw in that scene, and what purpose I want to apply to that image. If I had not taken an image at this time would that be a form of erasing and ignoring this issue? I would rather face discussion and debate about my work than to talk as if these issues are distant and abstract.

Thanks for taking the time to read this. I’d like to direct some of the attention from this topic and image to the website Addaction. It’s a UK-based organization providing aid and outreach to at-risk addicts. Please consider having a look at their website and possibly making a donation, or maybe going out of your way to produce an image that may also draw attention to this topic."

Friday, December 28, 2018

From ethics to accountability, this is how AI will suck less in 2019; Wired, December 27, 2018

Emma Byrne, Wired; From ethics to accountability, this is how AI will suck less in 2019

"If 2018 brought artificial intelligence systems into our homes, 2019 will be the year we think about their place in our lives. Next year, AIs will take on an even greater role: predicting how our climate will change, advising us on our health and controlling our spending. Conversational assistants will do more than ever at our command, but only if businesses and nation states become more transparent about their use. Until now, AIs have remained black boxes. In 2019, they will start to open up.

The coming year is also going to be the year that changes the way we talk about AI. From wide-eyed techno-lust or trembling anticipation of Roko's basilisk, by the end of next year, wild speculation about the future of AI will be replaced by hard decisions about ethics and democracy; 2019 will be the year that AI grows up."

Thursday, December 27, 2018

Tech Ethics Issues We Should All Be Thinking About In 2019; Forbes, December 27, 2018

Jessica Baron, Forbes; Tech Ethics Issues We Should All Be Thinking About In 2019

"For the seventh year in a row, I've released a list of ten technologies that people should be aware of in the hopes of giving non-experts a window into what's going on in labs around the world. The goal has always been to raise some of the ethical and policy issues that surround these technologies, not to scare anyone, but to drive home just how much the average American might be unaware of when it comes to what's coming down the pipeline or already in their homes, potentially doing harm...

In 2019, the list includes some technology you've definitely heard of (such as 5G) and some that will come as a surprise. If you'd like to see lists from previous years (as well as some further reading recommendations), you can go here. In the meantime, here are the 2019 entries for the Tech Top 10 List:"

Why It's Been A Dismal Year For Ethics In Washington; NPR, December 25, 2018

Peter Overby, NPR; Why It's Been A Dismal Year For Ethics In Washington

"Even setting aside the investigations by special counsel Robert Mueller and other federal prosecutors, Washington had more than its usual list of scandals in 2018."

Sunday, December 23, 2018

Their Art Raised Questions About Technology. Chinese Censors Had Their Own Answer.; The New York Times, December 14, 2018

Amy Qin, The New York Times; Their Art Raised Questions About Technology. Chinese Censors Had Their Own Answer.

"Artificial intelligence bots. 3-D printed human organs. Genomic sequencing.

These might seem to be natural topics of interest in a country determined to be the world’s leader in science and technology. But in China, where censors are known to take a heavy hand, several artworks that look closely at these breakthroughs have been deemed taboo by local cultural officials.

The works, which raise questions about the social and ethical implications of artificial intelligence and biotechnology, were abruptly pulled last weekend from the coming Guangzhou Triennial on the orders of cultural authorities in the southern Chinese province of Guangdong...

“Isn’t contemporary art meant to raise questions, and start discussions about important subjects in actuality and those of our near future?” he wrote. “What are China’s reasons for organizing all these big expensive ‘contemporary art’ manifestations if these questions, the core of contemporary art, freedom of speech, freedom of mind, are ignored and undermined?”"

It’s About Ethics in Comic Book Journalism: The Politics of X-Men: Red; Comic Watch, December 19, 2018

Bethany W. Pope, Comic Watch; It’s About Ethics in Comic Book Journalism: The Politics of X-Men: Red

"The central thesis of these eleven issues is that the act of compassion is a more powerful tool than the most brutally cinematic superpower. Empathy is the thing which slaughters fear. Looking at your enemy and seeing a person, woven through with hopes and loves, fears, the usual mixture of frailties, transforms disparate (possibly violent) mobs into a functional community by revealing that there is no ‘us versus them’. There’s only ‘us’. The X-Men are the perfect superhero group to make this point, because their entire existence is predicated on the phrase ‘protecting a world which fears and hates them’. The X-Men have always represented the struggle that othered groups (racial minorities, religious minorities, women, members of the LGBTQIA community) have faced when trying to live and function in a world that is slanted, dramatically, in favor of straight, white (American) men. Such a group is a necessary force in the current, fractured, geo-political climate.

The world needs a message of hope and unity in a time when real children (mostly brown) are being locked in cages at the border of America. And Western audiences, who are either complacent in their ignorance or else furious at their own seeming impotence, need to understand the ways in which their outlook, their opinions are being manipulated so that their complacency is undisturbed and their hatreds are intentionally focused against highly specified targets. Allegory has always been a gentle way to deliver a clear shot of truth, and the technique has functioned perfectly in this series...

In this run, Taylor assembled a team which was primarily composed of characters who are valued for their empathy and capacity for forgiveness."