Showing posts with label accountability. Show all posts

Saturday, January 19, 2019

Inside Facebook's 'cult-like' workplace, where dissent is discouraged and employees pretend to be happy all the time; CNBC, January 8, 2019

CNBC; Inside Facebook's 'cult-like' workplace, where dissent is discouraged and employees pretend to be happy all the time

"Former employees describe a top-down approach where major decisions are made by the company's leadership, and employees are discouraged from voicing dissent — in direct contradiction to one of Sandberg's mantras, "authentic self."...

"All the things we were preaching, we weren't doing enough of them. We weren't having enough hard conversations. They need to realize that. They need to reflect and ask if they're having hard conversations or just being echo chambers of themselves.""

Tuesday, January 15, 2019

Princeton collaboration brings new insights to the ethics of artificial intelligence; Princeton University, January 14, 2019

Molly Sharlach, Office of Engineering Communications, Princeton University; Princeton collaboration brings new insights to the ethics of artificial intelligence

"Should machines decide who gets a heart transplant? Or how long a person will stay in prison?

The growing use of artificial intelligence in both everyday life and life-altering decisions brings up complex questions of fairness, privacy and accountability. Surrendering human authority to machines raises concerns for many people. At the same time, AI technologies have the potential to help society move beyond human biases and make better use of limited resources.

“Princeton Dialogues on AI and Ethics” is an interdisciplinary research project that addresses these issues, bringing engineers and policymakers into conversation with ethicists, philosophers and other scholars. At the project’s first workshop in fall 2017, watching these experts get together and share ideas was “like nothing I’d seen before,” said Ed Felten, director of Princeton’s Center for Information Technology Policy (CITP). “There was a vision for what this collaboration could be that really locked into place.”

The project is a joint venture of CITP and the University Center for Human Values, which serves as “a forum that convenes scholars across the University to address questions of ethics and value” in diverse settings, said director Melissa Lane, the Class of 1943 Professor of Politics. Efforts have included a public conference, held in March 2018, as well as more specialized workshops beginning in 2017 that have convened experts to develop case studies, consider questions related to criminal justice, and draw lessons from the study of bioethics.

“Our vision is to take ethics seriously as a discipline, as a body of knowledge, and to try to take advantage of what humanity has understood over millennia of thinking about ethics, and apply it to emerging technologies,” said Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs. He emphasized that the careful implementation of AI systems can be an opportunity “to achieve better outcomes with less bias and less risk. It’s important not to see this as an entirely negative situation.”"

Thursday, January 10, 2019

Accenture CEO: Diversity and Inclusion Start From Within; Fortune, January 8, 2019

Damanick Dantes, Fortune; Accenture CEO: Diversity and Inclusion Start From Within

"Good leaders succeed by not only treating employees well, but also by measuring the results of building an inclusive work environment. After all, “the real driver of culture [outside of good leadership] is about how it feels to come into work every day,” Sweet says."

Saturday, January 5, 2019

China thinks it can arbitrarily detain anyone. It is time for change: The lack of global outcry over the detention of two Canadians virtually guarantees the next such case; The Guardian, January 3, 2019

Michael Caster, The Guardian; China thinks it can arbitrarily detain anyone. It is time for change: The lack of global outcry over the detention of two Canadians virtually guarantees the next such case

"Despite this – and although China has detained hundreds of Chinese human rights defenders and numerous foreign nationals under this and similar provisions, not to mention the arbitrary imprisonment or disappearance of some one million Uyghurs and Kazakhs across Xinjiang – it has generated shockingly limited international blowback.

In each case where China has not been held accountable, it virtually guarantees the next.

Any country that systematically denies the rights of its own citizens, and flaunts international norms, should worry us all because such abuses, as we are increasingly seeing, don’t stop at the colour of one’s passport."

Monday, December 31, 2018

Question Technology; Kip Currier, Ethics in a Tangled Web, December 31, 2018


Kip Currier; Question Technology

Ars Technica’s Timothy B. Lee’s 12/30/18 piece, “The hype around driverless cars came crashing down in 2018,” is a highly recommended overview of the annus horribilis that the closing year proved to be for the self-driving vehicle industry. Lee references the Gartner consulting group’s "hype cycle" for new innovations and technology:

In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality. 

We’ve seen the hype cycle replayed over and over again throughout the World Wide Web age (and throughout recorded history), albeit with new players and new innovations. Sometimes the hype delivers. Sometimes it comes with an unexpected price tag and consequences. Social media was hyped by many through branding and slogans. It offers benefits; chief among them, expanded opportunities for communication and connection. But it also has significant weaknesses that can and have been exploited by actors foreign and domestic.

Since 2016, for example, we’ve acutely learned—and are still learning—how social media, such as Facebook, can be used to weaponize information, misinform citizenry, and subvert democracy. From Facebook’s “inflated expectations” Version 1.0 through multiple iterations of hype and rehype, to its 2018 “trough of disillusionment”--which may or may not represent its nadir--much of the public’s perception of Facebook appears to finally be aligning with a more realistic picture of the company’s technology, as well as its less than transparent and accountable leadership. Indeed, consider how many times this year, and in the preceding decade and a half, Planet Earth’s social media-using citizens have heard Facebook CEO Mark Zuckerberg essentially say some version of “Trust me. Trust Facebook. We’re going to fix this.” (See CNBC’s well-documented 12/19/18 piece, “Mark Zuckerberg has been talking and apologizing about privacy since 2003 — here’s a reminder of what he’s said.”) Only for the public, like Charlie Brown, to have the proverbial football once again yanked away amid seemingly never-ending revelations of deliberate omissions by Facebook leadership concerning users’ data collection and use.

To better grasp the impacts and lessons we can learn from recognition of the hype cycle, it’s useful to remind ourselves of some other near-recent examples of highly-hyped technologies:

In the past decade, many talked about "the death of the print book"—supplanted by the ebook—and the extinction of independent (i.e. non-Amazon) booksellers. Now, print books are thriving again and independent bookstores are making a gradual comeback in some communities. See the 11/3/18 Observer article "Are E-Books Finally Over? The Publishing Industry Unexpectedly Tilts Back to Print" and Vox’s 12/18/18 piece, “Instagram is helping save the indie bookstore”.

More recently, Massive Open Online Courses (MOOCs) were touted as the game-changer that would have higher education quaking in its ivory tower-climbing boots. See Thomas L. Friedman's 2013 New York Times Opinion piece "Revolution Hits the Universities"; five years later, in 2018, a MOOCs-driven revolution seems less inevitable, or perhaps even less desirable, than postulated when MOOCs had become all the rage in some quarters. Even a few months before Friedman’s article, his New York Times employer had declared 2012 “The Year of the MOOC”. In pertinent part from that article:


“I like to call this the year of disruption,” says Anant Agarwal, president of edX, “and the year is not over yet.”

MOOCs have been around for a few years as collaborative techie learning events, but this is the year everyone wants in. [Note to the author: you might just want to qualify and/or substantiate that hyperbolic assertion a bit about “everyone”!] Elite universities are partnering with Coursera at a furious pace. It now offers courses from 33 of the biggest names in postsecondary education, including Princeton, Brown, Columbia and Duke. In September, Google unleashed a MOOC-building online tool, and Stanford unveiled Class2Go with two courses.

Nick McKeown is teaching one of them, on computer networking, with Philip Levis (the one with a shock of magenta hair in the introductory video). Dr. McKeown sums up the energy of this grand experiment when he gushes, “We’re both very excited.” 

But read on, to the very next two sentences in the piece:

Casually draped over auditorium seats, the professors also acknowledge that they are not exactly sure how this MOOC stuff works.

“We are just going to see how this goes over the next few weeks,” says Dr. McKeown.

Yes, you read that right: 

“…they are not exactly sure how this MOOC stuff works.” And “‘We are just going to see how this goes over the next few weeks,’ says Dr. McKeown.”

Now, in 2018, who is even talking about MOOCs? Certainly, MOOCs are neither totally dead nor completely out of the education picture. But the fever pitch exhortations around the 1st Coming of the MOOC have ebbed, as hype machines—and change consultants—have inevitably moved on to “the next bright shiny object”.

Technology has many good points, as well as bad points, and, shall we say, aspects that cause legitimate concern. It’s here to stay. I get that. Appreciating the many positive aspects of technology in our lives does not mean that we can’t and shouldn’t still ask questions about the adoption and use of technology. As a mentor of mine often points out, society frequently pushes people to make binary choices, to select either X or Y, when we may, rather, select X and Y. The phrase Question Authority was popularized in the boundary-changing 1960s. Its pedigree is murky and may actually trace back to ancient Greek society. That’s a topic for another piece by someone else. But the phrase, modified to Question Technology, can serve as an inspirational springboard for today.

Happily, 2018 also saw more and more calls for AI ethics, data ethics, ethics courses in computer science and other educational programs, and more permutations of ethics in technology. (And that’s not even getting at all the calls for ethics in government!) Arguably, 2018 was the year that ethics was writ large.

In sum, we need to remind ourselves to be wary of anyone or any entity touting that they know with absolute certainty what a new technology will or will not do today, a year from now, or 10+ years in the fast-moving future, particularly absent the provision of hard evidence to support such claims. Just because someone says it’s so doesn’t make it so. Or that it should be so.

In this era of digitally-dispersed disinformation, misinformation, and “alternative facts”, we all need to remind ourselves to think critically, question pronouncements and projections, and verify the truthfulness of assertions with evidence-based analysis and bona fide facts.


Friday, December 28, 2018

From ethics to accountability, this is how AI will suck less in 2019; Wired, December 27, 2018

Emma Byrne, Wired; From ethics to accountability, this is how AI will suck less in 2019

"If 2018 brought artificial intelligence systems into our homes, 2019 will be the year we think about their place in our lives. Next year, AIs will take on an even greater role: predicting how our climate will change, advising us on our health and controlling our spending. Conversational assistants will do more than ever at our command, but only if businesses and nation states become more transparent about their use. Until now, AIs have remained black boxes. In 2019, they will start to open up.

The coming year is also going to be the year that changes the way we talk about AI. From wide-eyed techno-lust or trembling anticipation of Roko's basilisk, by the end of next year, wild speculation about the future of AI will be replaced by hard decisions about ethics and democracy; 2019 will be the year that AI grows up."

Tuesday, December 11, 2018

When algorithms go wrong we need more power to fight back, say AI researchers; The Verge, December 8, 2018

James Vincent, The Verge; When algorithms go wrong we need more power to fight back, say AI researchers

"Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That’s one of the major conclusions in a new report issued by AI Now, a research group home to employees from tech companies like Microsoft and Google and affiliated with New York University.

The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call “the accountability gap” as this technology is integrated “across core social domains.” They put forward ten recommendations, including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and “truth-in-advertising” laws for AI products, so that companies can’t simply trade on the reputation of the technology to sell their services."

Wednesday, December 5, 2018

Supreme Court hands Fox News another win in copyright case against TVEyes monitoring service; The Washington Post, December 3, 2018

Erik Wemple, The Washington Post; Supreme Court hands Fox News another win in copyright case against TVEyes monitoring service

"The Supreme Court’s decision not to hear the case could leave media critics scrambling. How to fact-check the latest gaffe on “Hannity”? Did Brian Kilmeade really say that? To be sure, cable-news watchers commonly post the most extravagant cable-news moments on Twitter and other social media — a democratic activity that lies outside of the TVEyes ruling, because it’s not a money-making thing. Yet Fox News watchdogs use TVEyes and other services to soak in the full context surrounding those widely circulated clips, and that task is due to get more complicated. That said, services may still provide transcripts without infringing the Fox News copyright."

Saturday, November 24, 2018

If Trump is cornered, the judges he disdains may finally bring him down; The Guardian, November 24, 2018

Walter Shapiro, The Guardian; If Trump is cornered, the judges he disdains may finally bring him down

"Concepts like democracy, a free press, due process, an independent judiciary and the rule of law are lost on Trump. As far as his understanding goes, the constitution might just as well be carved in cuneiform characters on stone tablets."

Friday, November 23, 2018

Confronted with the bloody behavior of autocrats, Trump, instead, blames the world; The Washington Post, November 22, 2018

Kristine Phillips, The Washington Post; Confronted with the bloody behavior of autocrats, Trump, instead, blames the world


[Kip Currier: We must call out and hold accountable those leaders who engage in blurring the boundaries of objective truth, as in the example excerpted below, in which Donald Trump asserts that:

"Maybe the world should be held accountable, because the world is a vicious place."

Such a statement is the manifest amoral apotheosis of a Gospel of the Inherent Unaccountability of Actors and States:
If everyone is culpable, then no one is culpable.
All are equal in blame.
No one is accountable to anyone else.
No system shall stand in judgment above any other.

Such a nakedly irreproachable manifesto flies in the face of bedrock principles undergirding the rule of law and the U.S. Constitutional system of checks and balances. It is a credo for unchecked anarchy, the very antithesis of originalism. It is the recurrent rhetoric and obfuscatory modus operandi of the oppressor, the despot, the tyrant. The aspiring authoritarian conman.

Its Orwellian aims--to cloud conceptions of "right and wrong", to gum up and break down the imperfect but fine-tuned cogs of systems and rules that hold people responsible for their action and inaction, to "gaslight", confuse, overwhelm with disinformation, demoralize, divide, and manipulate--must be named, called out, and rejected by those who see such self-serving machinations for what they are, and the threats to democracy, the rule of law, and free thinking peoples that they represent.

Inspired by and building upon the prescient words of George Orwell's 1984, to speak truth to power:

Mr. Trump--and those of your ilk, who weaponize facts and wield misinformation to attempt to delegitimize truth and reason--War is NOT peace. Freedom is NOT slavery. Ignorance is NOT strength.]

"In fielding questions from reporters about the killing of Washington Post contributing columnist Jamal Khashoggi, President Trump avoided blaming Mohammed bin Salman, despite the CIA’s findings that the Saudi crown prince had ordered the assassination.

“Who should be held accountable?” a reporter asked Trump Thursday. Sitting inside his Mar-a-Lago resort in Florida, the president took a deep breath, seemingly mulling his response.

Then he said: “Maybe the world should be held accountable, because the world is a vicious place.""