Monday, December 31, 2018

Question Technology; Kip Currier, Ethics in a Tangled Web, December 31, 2018


Kip Currier; Question Technology

Ars Technica’s Timothy B. Lee’s 12/30/18 article “The hype around driverless cars came crashing down in 2018” is a highly recommended overview of the annus horribilis that 2018 proved to be for the self-driving vehicle industry. Lee references the Gartner consulting group’s "hype cycle" for new innovations and technologies:

In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality. 

We’ve seen the hype cycle replayed over and over again throughout the World Wide Web age (and throughout recorded history), albeit with new players and new innovations. Sometimes the hype delivers. Sometimes it comes with an unexpected price tag and consequences. Social media was hyped by many through branding and slogans. It offers benefits; chief among them, expanded opportunities for communication and connection. But it also has significant weaknesses that can and have been exploited by actors foreign and domestic.

Since 2016, for example, we’ve learned acutely—and are still learning—how social media platforms such as Facebook can be used to weaponize information, misinform the citizenry, and subvert democracy. From Facebook’s “inflated expectations” Version 1.0 through multiple iterations of hype and rehype to its 2018 “trough of disillusionment” (which may or may not represent its nadir), much of the public’s perception of Facebook appears finally to be aligning with a more realistic picture of the company’s technology, as well as of its less than transparent and accountable leadership. Indeed, consider how many times this year, and over the preceding decade and a half, the world’s social media-using citizens have heard Facebook CEO Mark Zuckerberg say some version of “Trust me. Trust Facebook. We’re going to fix this.” (See CNBC’s well-documented 12/19/18 piece “Mark Zuckerberg has been talking and apologizing about privacy since 2003 — here’s a reminder of what he’s said.”) Only for the public, like Charlie Brown, to have the proverbial football yanked away yet again by seemingly never-ending revelations of deliberate omissions by Facebook leadership concerning the collection and use of users’ data.

To better grasp the impacts of, and lessons we can learn from, the hype cycle, it’s useful to remind ourselves of some other recent examples of highly hyped technologies:

In the past decade, many talked about "the death of the print book"—supplanted by the ebook—and the extinction of independent (i.e. non-Amazon) booksellers. Now, print books are thriving again and independent bookstores are making a gradual comeback in some communities. See the 11/3/18 Observer article "Are E-Books Finally Over? The Publishing Industry Unexpectedly Tilts Back to Print" and Vox’s 12/18/18 piece, “Instagram is helping save the indie bookstore”.

More recently, Massive Open Online Courses (MOOCs) were touted as the game-changer that would have higher education quaking in its ivory tower-climbing boots. See Thomas L. Friedman's 2013 New York Times Opinion piece "Revolution Hits the Universities"; five years later, in 2018, a MOOCs-driven revolution seems less inevitable, or perhaps even less desirable, than postulated when MOOCs became all the rage in some quarters. Even a few months before Friedman’s article, his New York Times employer had declared 2012 “The Year of the MOOC”. In pertinent part from that article:


“I like to call this the year of disruption,” says Anant Agarwal, president of edX, “and the year is not over yet.”

MOOCs have been around for a few years as collaborative techie learning events, but this is the year everyone wants in. [Note to the author: you might just want to qualify and/or substantiate that hyperbolic assertion a bit about “everyone”!] Elite universities are partnering with Coursera at a furious pace. It now offers courses from 33 of the biggest names in postsecondary education, including Princeton, Brown, Columbia and Duke. In September, Google unleashed a MOOC-building online tool, and Stanford unveiled Class2Go with two courses.

Nick McKeown is teaching one of them, on computer networking, with Philip Levis (the one with a shock of magenta hair in the introductory video). Dr. McKeown sums up the energy of this grand experiment when he gushes, “We’re both very excited.” 

But read on, to the very next two sentences in the piece:

Casually draped over auditorium seats, the professors also acknowledge that they are not exactly sure how this MOOC stuff works.

“We are just going to see how this goes over the next few weeks,” says Dr. McKeown.

Yes, you read that right: 

“…they are not exactly sure how this MOOC stuff works.” And “‘We are just going to see how this goes over the next few weeks,’ says Dr. McKeown.”

Now, in 2018, who is even talking about MOOCs? Certainly, MOOCs are neither totally dead nor completely out of the education picture. But the fever-pitch exhortations around the First Coming of the MOOC have ebbed, as hype machines—and change consultants—have inevitably moved on to “the next bright shiny object”.

Technology has many good points, as well as bad points, and, shall we say, aspects that cause legitimate concern. It’s here to stay. I get that. Appreciating the many positive aspects of technology in our lives does not mean that we can’t and shouldn’t still ask questions about the adoption and use of technology. As a mentor of mine often points out, society frequently pushes people to make binary choices, to select either X or Y, when we may, rather, select X and Y. The phrase “Question Authority” was popularized in the boundary-changing 1960s. Its pedigree is murky and may actually trace back to ancient Greek society. That’s a topic for another piece by someone else. But the phrase, modified to “Question Technology”, can serve as an inspirational springboard for today.

Happily, 2018 also saw more and more calls for AI ethics, data ethics, ethics courses in computer science and other educational programs, and more permutations of ethics in technology. (And that’s not even getting at all the calls for ethics in government!) Arguably, 2018 was the year that ethics was writ large.

In sum, we need to remind ourselves to be wary of anyone or any entity touting that they know with absolute certainty what a new technology will or will not do today, a year from now, or 10+ years in the fast-moving future, particularly absent the provision of hard evidence to support such claims. Just because someone says it’s so doesn’t make it so. Or, that it should be so.

In this era of digitally dispersed disinformation, misinformation, and “alternative facts”, we all need to remind ourselves to think critically, question pronouncements and projections, and verify the truthfulness of assertions with evidence-based analysis and bona fide facts.


The hype around driverless cars came crashing down in 2018; Ars Technica, December 30, 2018

Timothy B. Lee, Ars Technica; The hype around driverless cars came crashing down in 2018

"In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality."

Sunday, December 30, 2018

Colleges Grapple With Teaching the Technology and Ethics of A.I.; The New York Times, November 2, 2018

Alina Tugend, The New York Times; Colleges Grapple With Teaching the Technology and Ethics of A.I.


"At the University of Washington, a new class called “Intelligent Machinery, Identity and Ethics,” is being taught this fall by a team leader at Google and the co-director of the university’s Computational Neuroscience program.

Daniel Grossman, a professor and deputy director of undergraduate studies at the university’s Paul G. Allen School of Computer Science and Engineering, explained the purpose this way:

The course “aims to get at the big ethical questions we’ll be facing, not just in the next year or two but in the next decade or two.”

David Danks, a professor of philosophy and psychology at Carnegie Mellon, just started teaching a class, “A.I, Society and Humanity.” The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

“The key is to make sure they have the opportunities to really explore the ways technology can have an impact — to think how this will affect people in poorer communities or how it can be abused,” he said."

Defending ‘Needles in the Sewer’ and Photographing the Disadvantaged; PetaPixel, December 29, 2018

Simon King, PetaPixel; Defending ‘Needles in the Sewer’ and Photographing the Disadvantaged

[Kip Currier: Thought-provoking article identifying and discussing some of the sticky ethical issues of whether to photograph or not to photograph, particularly regarding vulnerable populations and difficult topics. Kudos to the photographer Simon King for shedding light on his metacognition (i.e. thinking about thinking) with regard to how and when he takes pictures and what he does and does not do with them.

Beyond photography, the issues raised in the piece have broader implications as well for digital age technologies' impacts on disadvantaged communities related to the increasing collection and use of data generated by AI algorithms, mass surveillance, facial recognition, biometric information, etc. The last two paragraphs of a November 2018 New York Times article, Colleges Grapple With Teaching the Technology and Ethics of A.I., provide an example of some of the ways higher education is preparing students to better recognize and address these issues:

David Danks, a professor of philosophy and psychology at Carnegie Mellon, just started teaching a class, “A.I, Society and Humanity.” The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

“The key is to make sure they have the opportunities to really explore the ways technology can have an impact — to think how this will affect people in poorer communities or how it can be abused,” he said.]



"The main issues people brought up about this image were consent and exploitation...

My responsibility (and maybe yours?) as a photographer is to avoid self-censorship. I can always choose to publish an image or not, but only if that image exists in the first place. If I take an image then I should have the presence of mind to understand what I saw in that scene, and what purpose I want to apply to that image. If I had not taken an image at this time would that be a form of erasing and ignoring this issue? I would rather face discussion and debate about my work than to talk as if these issues are distant and abstract.

Thanks for taking the time to read this. I’d like to direct some of the attention from this topic and image to the website Addaction. It’s a UK-based organization providing aid and outreach to at-risk addicts. Please consider having a look at their website and possibly making a donation, or maybe going out of your way to produce an image that may also draw attention to this topic."

Friday, December 28, 2018

From ethics to accountability, this is how AI will suck less in 2019; Wired, December 27, 2018

Emma Byrne, Wired; From ethics to accountability, this is how AI will suck less in 2019

"If 2018 brought artificial intelligence systems into our homes, 2019 will be the year we think about their place in our lives. Next year, AIs will take on an even greater role: predicting how our climate will change, advising us on our health and controlling our spending. Conversational assistants will do more than ever at our command, but only if businesses and nation states become more transparent about their use. Until now, AIs have remained black boxes. In 2019, they will start to open up.

The coming year is also going to be the year that changes the way we talk about AI. From wide-eyed techno-lust or trembling anticipation of Roko's basilisk, by the end of next year, wild speculation about the future of AI will be replaced by hard decisions about ethics and democracy; 2019 will be the year that AI grows up."

Thursday, December 27, 2018

Tech Ethics Issues We Should All Be Thinking About In 2019; Forbes, December 27, 2018

Jessica Baron, Forbes; Tech Ethics Issues We Should All Be Thinking About In 2019

"For the seventh year in a row, I've released a list of ten technologies that people should be aware of in the hopes of giving non-experts a window into what's going on in labs around the world. The goal has always been to raise some of the ethical and policy issues that surround these technologies, not to scare anyone, but to drive home just how much the average American might be unaware of when it comes to what's coming down the pipeline or already in their homes, potentially doing harm...

In 2019, the list includes some technology you've definitely heard of (such as 5G) and some that will come as a surprise. If you'd like to see lists from previous years (as well as some further reading recommendations), you can go here. In the meantime, here are the 2019 entries for the Tech Top 10 List:"

Why It's Been A Dismal Year For Ethics In Washington; NPR, December 25, 2018

Peter Overby, NPR; Why It's Been A Dismal Year For Ethics In Washington

"Even setting aside the investigations by special counsel Robert Mueller and other federal prosecutors, Washington had more than its usual list of scandals in 2018."

Sunday, December 23, 2018

Their Art Raised Questions About Technology. Chinese Censors Had Their Own Answer.; The New York Times, December 14, 2018

Amy Qin, The New York Times; Their Art Raised Questions About Technology. Chinese Censors Had Their Own Answer.

"Artificial intelligence bots. 3-D printed human organs. Genomic sequencing. 

These might seem to be natural topics of interest in a country determined to be the world’s leader in science and technology. But in China, where censors are known to take a heavy hand, several artworks that look closely at these breakthroughs have been deemed taboo by local cultural officials.

The works, which raise questions about the social and ethical implications of artificial intelligence and biotechnology, were abruptly pulled last weekend from the coming Guangzhou Triennial on the orders of cultural authorities in the southern Chinese province of Guangdong...

“Isn’t contemporary art meant to raise questions, and start discussions about important subjects in actuality and those of our near future?” he wrote. “What are China’s reasons for organizing all these big expensive ‘contemporary art’ manifestations if these questions, the core of contemporary art, freedom of speech, freedom of mind, are ignored and undermined?”"

It’s About Ethics in Comic Book Journalism: The Politics of X-Men: Red; Comic Watch, December 19, 2018

Bethany W. Pope, Comic Watch; It’s About Ethics in Comic Book Journalism: The Politics of X-Men: Red

"The central thesis of these eleven issues is that the act of compassion is a more powerful tool than the most brutally cinematic superpower. Empathy is the thing which slaughters fear. Looking at your enemy and seeing a person, woven through with hopes and loves, fears, the usual mixture of frailties, transforms disparate (possibly violent) mobs into a functional community by revealing that there is no ‘us versus them’. There’s only ‘us’. The X-Men are the perfect superhero group to make this point, because their entire existence is predicated on the phrase ‘protecting a world which fears and hates them’. The X-Men have always represented the struggle that othered groups (racial minorities, religious minorities, women, members of the LGBTQIA community) have faced when trying to live in function in a world that is slanted, dramatically, in favor of straight, white (American) men. Such a group is a necessary force in the current, fractured, geo-political climate.

The world needs a message of hope and unity in a time when real children (mostly brown) are being locked in cages at the border of America. And Western audiences, who are either complacent in their ignorance or else furious at their own seeming impotence, need to understand the ways in which their outlook, their opinions are being manipulated so that their complacency is undisturbed and their hatreds are intentionally focused against highly specified targets. Allegory has always been a gentle way to deliver a clear shot of truth, and the technique has functioned perfectly in this series...

In this run, Taylor assembled a team which was primarily composed of characters who are valued for their empathy and capacity for forgiveness."

Friday, December 21, 2018

Stan Lee Unleashed the Heroic Power of the Outcast; Wired, December 13, 2018

Adam Rogers, Wired; Stan Lee Unleashed the Heroic Power of the Outcast

"From the fantasy-pulp midden, Lee had excavated a gem of a truth: These tales about men and women in garish tights hitting each other were also about more. Super­heroes had incredible abilities, yes, but they were also often the victims of prejudice themselves, or trapped in moral webs stronger than anything Spider-­Man ever thwipped. So the comics appealed to people who felt the same, even before Lee and the other Marvel creators published the first African American heroes, the first popular Asian American heroes, and strong, leading-character women in numbers large enough to populate a dozen summer crossovers...

His death encouraged people to tell stories of Lee’s kindness and enthusiasm. But for every story that circulated after Lee’s death about how wonderful and caring he was, comics professionals tell other tales in which Lee is … not.

Every bit as complicated as the characters he helped bring into the world, Lee taught generations of nerds the concepts of responsibility, morality, and love. He waged a sometimes ham-fisted battle against prejudice, misunderstanding, and evil. This is what makes some of nerd-dom’s recent tack toward intolerance so painful; other­ishness is engineered into comics’ radioactive, mutated DNA. Even if Lee wasn’t a super human, he was super­human, empowering colleagues to leap creative obstacles and to give readers a sense of their own secret strengths."

What are tech companies doing about ethical use of data? Not much; The Conversation, November 27, 2018

The Conversation; What are tech companies doing about ethical use of data? Not much

"Our relationship with tech companies has changed significantly over the past 18 months. Ongoing data breaches, and the revelations surrounding the Cambridge Analytica scandal, have raised concerns about who owns our data, and how it is being used and shared.

Tech companies have vowed to do better. Following his grilling by both the US Congress and the EU Parliament, Facebook CEO, Mark Zuckerberg, said Facebook will change the way it shares data with third party suppliers. There is some evidence that this is occurring, particularly with advertisers.

But have tech companies really changed their ways? After all, data is now a primary asset in the modern economy.

To find whether there’s been a significant realignment between community expectations and corporate behaviour, we analysed the data ethics principles and initiatives that various global organisations have committed since the various scandals broke.

What we found is concerning. Some of the largest organisations have not demonstrably altered practices, instead signing up to ethics initiatives that are neither enforced nor enforceable."

No, You Don’t Really Look Like That; The Atlantic, December 18, 2018

Alexis C. Madrigal, The Atlantic; No, You Don’t Really Look Like That

"The stakes can be high: Artificial intelligence makes it easy to synthesize videos into new, fictitious ones often called “deepfakes.” “We’ll shortly live in a world where our eyes routinely deceive us,” wrote my colleague Franklin Foer. “Put differently, we’re not so far from the collapse of reality.” Deepfakes are one way of melting reality; another is changing the simple phone photograph from a decent approximation of the reality we see with our eyes to something much different."

Facebook: A Case Study in Ethics; CMS Wire, December 20, 2018

Laurence Hart, CMS Wire; Facebook: A Case Study in Ethics

"It feels like every week, a news item emerges that could serve as a case study in ethics. A company's poor decision when exposed to the light of day (provided by the press) seems shockingly bad. The ethical choice in most cases should have been obvious, but it clearly wasn’t the one made.

This week, as in many weeks in 2018, the case study comes from Facebook."

Thursday, December 20, 2018

Why Should Anyone Believe Facebook Anymore?; Wired, December 19, 2018

Fred Vogelstein, Wired; Why Should Anyone Believe Facebook Anymore?
"Americans are weird about their tycoons. We have a soft spot for success, especially success from people as young as Zuckerberg was when he started Facebook. But we hate it when they become as super-rich and powerful as he is now and seem accountable to no one. We'll tolerate rogues like Larry Ellison, founder and CEO of Oracle, who once happily admitted to hiring investigators to search Bill Gates' trash. Ellison makes no effort to hide the fact that he's in it for the money and the power. But what people despise more than anything is what we have now with tech companies in Silicon Valley, especially with Facebook: greed falsely wrapped in sanctimony.

Facebook gave the world a great new tool for staying connected. Zuckerberg even pitched it as a better internet—a safe space away from the anonymous trolls lurking everywhere else online. But it’s now rather debatable whether Facebook is really a better internet that is making the world a better place, or just another big powerful corporation out to make as much money as possible. Perhaps the world would be happier with Zuckerberg and Facebook, and the rest of their Silicon Valley brethren, if they stopped pretending to be people and businesses they are not."

Facebook Didn’t Sell Your Data; It Gave It Away: In exchange for even more data about you from Amazon, Netflix, Spotify, Microsoft, and others; The Atlantic, December 19, 2018

Alexis C. Madrigal, The Atlantic; Facebook Didn’t Sell Your Data; It Gave It Away
"By the looks of it, other tech players have been happy to let Facebook get beaten up while their practices went unexamined. And then, in this one story, the radioactivity of Facebook’s data hoard spread basically across the industry. There is a data-industrial complex, and this is what it looked like."

How You Can Help Fight the Information Wars: Silicon Valley won’t save us. We’re on our own; The New York Times, December 18, 2018

Kara Swisher, The New York Times; How You Can Help Fight the Information Wars: Silicon Valley won’t save us. We’re on our own.

[Kip Currier: A rallying cry to all persons passionate about and/or working on issues related to information literacy and evaluating information...]

"For now, it’s not clear what we can do, except take control of our own individual news consumption. Back in July, in fact, Ms. DiResta advised consumer restraint as the first line of defense, especially when encountering information that any passably intelligent person could guess might have been placed by a group seeking to manufacture discord.

“They’re preying on your confirmation bias,” she said. “When content is being pushed to you, that’s something that you want to see. So, take the extra second to do the fact-check, even if it confirms your worst impulses about something you absolutely hate — before you hit the retweet button, before you hit the share button, just take the extra second.”

If we really are on our own in this age of information warfare, as the Senate reports suggest, there’s only one rule that can help us win it: See something, say nothing."

Facebook’s Data Sharing and Privacy Rules: 5 Takeaways From Our Investigation; The New York Times, December 18, 2018

Nicholas Confessore, Michael LaForgia and Gabriel J.X. Dance, The New York Times; Facebook’s Data Sharing and Privacy Rules: 5 Takeaways From Our Investigation

"You are the product: That is the deal many Silicon Valley companies offer to consumers. The users get free search engines, social media accounts and smartphone apps, and the companies use the personal data they collect — your searches, “likes,” phone numbers and friends — to target and sell advertising.

But an investigation by The New York Times, based on hundreds of pages of internal Facebook documents and interviews with about 50 former employees of Facebook and its partners, reveals that the marketplace for that data is even bigger than many consumers suspected. And Facebook, which collects more information on more people than almost any other private corporation in history, is a central player.

Here are five takeaways from our investigation."

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants; The New York Times, December 18, 2018

Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore, The New York Times; As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

"For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond."

HHS Seeks Feedback Regarding HIPAA Rules; Lexology, December 18, 2018