Showing posts with label Facebook.

Thursday, October 3, 2019

E.U.’s Top Court Rules Against Facebook in Global Takedown Case; The New York Times, October 3, 2019

The New York Times; E.U.’s Top Court Rules Against Facebook in Global Takedown Case


"The case has been closely watched because of its potential ripple effects for regulating internet content. The enforcement of defamation, libel and privacy laws varies from country to country, with language and behavior that is allowed in one nation prohibited in another. The court’s decision highlights the difficulty of creating uniform standards to govern an inherently borderless web and then enforcing them.

Facebook and other critics have warned that letting a single nation force an internet platform to delete material elsewhere would hurt free expression...

Last week, the European Court of Justice limited the reach of the European privacy law known as the “right to be forgotten,” which allows European citizens to demand Google remove links to sensitive personal data from search results. The court said Google could not be ordered to remove links to websites globally, except in certain circumstances when weighed against the rights to free expression and the public’s right to information."

Tuesday, April 23, 2019

Once upon a time in Silicon Valley: How Facebook's open-data nirvana fell apart; NBC News, April 19, 2019

David Ingram and Jason Abbruzzese, NBC News; Once upon a time in Silicon Valley: How Facebook's open-data nirvana fell apart

"Facebook’s missteps have raised awareness about the possible abuse of technology, and created momentum for digital privacy laws in Congress and in state legislatures.

“The surreptitious sharing with third parties because of some ‘gotcha’ in the terms of service is always going to upset people because it seems unfair,” said Michelle Richardson, director of the data and privacy project at the Center for Democracy & Technology.

After the past two years, she said, “you can just see the lightbulb going off over the public’s head.”"

Wednesday, March 6, 2019

Teen who defied anti-vax mom says she got false information from one source: Facebook; The Washington Post, March 5, 2019

Michael Brice-Saddler, The Washington Post; Teen who defied anti-vax mom says she got false information from one source: Facebook

"An 18-year-old from Ohio who famously inoculated himself against his mother’s wishes in December says he attributes his mother’s anti-vaccine ideology to a single source: Facebook.

Ethan Lindenberger, a high school senior, testified Tuesday before the Senate Committee on Health, Education, Labor and Pensions, and underscored the importance of “credible” information. In contrast, he said, the false and deep-rooted beliefs his mother held — that vaccines were dangerous — were perpetuated by social media. Specifically, he said, she turned to anti-vaccine groups on social media for evidence that supported her point of view.

In an interview with The Washington Post on Tuesday, Lindenberger said Facebook, or websites that were linked on Facebook, is really the only source his mother ever relied on for her anti-vaccine information."

Wednesday, February 13, 2019

Facebook under pressure to halt rise of anti-vaccination groups; The Guardian, February 12, 2019

Ed Pilkington and Jessica Glenza, The Guardian; Facebook under pressure to halt rise of anti-vaccination groups

"Dr Noni MacDonald, a professor of pediatrics at Dalhousie University in Halifax, Nova Scotia, Canada, who has worked as an expert adviser to the WHO on immunization, questioned why Facebook was unrestrained by the stringent controls against misinformation put on drug companies. “We don’t let big pharma or big food or big radio companies do this, so why should we let this happen in this venue?”

She added: “When a drug company puts a drug up in the formal media, they can’t tell you something false or they will be sued. So why is this different? Why is this allowed?”"

Thursday, January 31, 2019

Facebook has declared sovereignty; The Washington Post, January 31, 2019

Molly Roberts, The Washington Post; Facebook has declared sovereignty

"That’s a lot of control, as Facebook has implicitly conceded by creating this court. But the court alone cannot close the chasm of accountability that renders Facebook’s preeminence so unsettling. Democracy, at least in theory, allows us to change things we do not like. We can vote out legislators who pass policy we disagree with, or who fail to pass policy at all. We cannot vote out Facebook. We can only quit it.

But can we really? Facebook has grown so large and, in many countries, essential that deleting an account seems to many like an impossibility. Facebook isn’t even just Facebook anymore: It is Instagram and WhatsApp, too. To people in many less developed countries, it is the Internet. Many users may feel more like citizens than customers, in that they cannot just quit. But they are not being governed with their consent.

No court — or oversight board — can change that."

Facebook has been paying teens $20 a month for access to all of their personal data; Vox, January 30, 2019

Kaitlyn Tiffany, Vox; Facebook has been paying teens $20 a month for access to all of their personal data

"The shocking “research” program has restarted a long-standing feud between Facebook and Apple.

 

"Facebook, now entering a second year of huge data-collection scandals, can’t really afford this particular news story. However, it’s possible the company just weighed the risks of public outrage against the benefits of the data and made a deliberate choice: Knowing which apps people are using, how they’re using them, and for how long is extremely useful information for Facebook."

Sunday, January 20, 2019

Facebook Backs University AI Ethics Institute With $7.5 Million; Forbes, January 20, 2019

Sam Shead, Forbes; Facebook Backs University AI Ethics Institute With $7.5 Million

"Facebook is backing an AI ethics institute at the Technical University of Munich with $7.5 million.

The TUM Institute for Ethics in Artificial Intelligence, which was announced on Sunday, will aim to explore fundamental issues affecting the use and impact of AI, Facebook said...

"We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy. Our evidence-based research will address issues that lie at the interface of technology and human values," said TUM Professor Dr. Christoph Lütge, who will lead the institute. 


"Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction." 

Last year, TUM was ranked 6th in the world for AI research by the Times Higher Education magazine behind universities like Carnegie Mellon University in the USA and Nanyang Technological University in Singapore."

Saturday, January 19, 2019

Inside Facebook's 'cult-like' workplace, where dissent is discouraged and employees pretend to be happy all the time; CNBC, January 8, 2019

CNBC; Inside Facebook's 'cult-like' workplace, where dissent is discouraged and employees pretend to be happy all the time

"Former employees describe a top-down approach where major decisions are made by the company's leadership, and employees are discouraged from voicing dissent — in direct contradiction to one of Sandberg's mantras, "authentic self."...

"All the things we were preaching, we weren't doing enough of them. We weren't having enough hard conversations. They need to realize that. They need to reflect and ask if they're having hard conversations or just being echo chambers of themselves.""

Tuesday, January 15, 2019

Top ethics and compliance failures of 2018; Compliance Week, December 17, 2018

Jaclyn Jaeger, Compliance Week; Top ethics and compliance failures of 2018 

 

"Shady data privacy practices, allegations of financial misconduct, and widespread money-laundering schemes make up Compliance Week’s list of the top five ethics and compliance failures of 2018. All impart some key compliance lessons."

Monday, December 31, 2018

Question Technology; Kip Currier, Ethics in a Tangled Web, December 31, 2018


Kip Currier; Question Technology

Ars Technica’s Timothy B. Lee’s 12/30/18 piece, “The hype around driverless cars came crashing down in 2018,” is a highly recommended overview of the annus horribilis that the closing year has been for the self-driving vehicle industry. Lee references the Gartner consulting firm’s "hype cycle" for new innovations and technologies:

In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality. 

We’ve seen the hype cycle replayed over and over again throughout the World Wide Web age (and throughout recorded history), albeit with new players and new innovations. Sometimes the hype delivers. Sometimes it comes with an unexpected price tag and unintended consequences. Social media was hyped by many through branding and slogans. It offers benefits, chief among them expanded opportunities for communication and connection. But it also has significant weaknesses that can be, and have been, exploited by actors foreign and domestic.

Since 2016, for example, we’ve acutely learned—and are still learning—how social media such as Facebook can be used to weaponize information, misinform the citizenry, and subvert democracy. From Facebook’s “inflated expectations” Version 1.0 through multiple iterations of hype and rehype, to its 2018 “trough of disillusionment”--which may or may not represent its nadir--much of the public’s perception of Facebook appears finally to be aligning with a more realistic picture of the company’s technology, as well as of its less than transparent and accountable leadership. Indeed, consider how many times this year, and in the preceding decade and a half, Planet Earth’s social media-using citizens have heard Facebook CEO Mark Zuckerberg essentially say some version of “Trust me. Trust Facebook. We’re going to fix this.” (See CNBC’s well-documented 12/19/18 piece, “Mark Zuckerberg has been talking and apologizing about privacy since 2003 — here’s a reminder of what he’s said.”) Only for the public, like Charlie Brown, to have the proverbial football yanked away once again by seemingly never-ending revelations of deliberate omissions by Facebook’s leadership concerning the collection and use of users’ data.

To better grasp the impacts of the hype cycle and the lessons we can learn from recognizing it, it’s useful to recall some other recent examples of highly hyped technologies:

In the past decade, many talked about "the death of the print book"—supplanted by the ebook—and the extinction of independent (i.e. non-Amazon) booksellers. Now, print books are thriving again and independent bookstores are making a gradual comeback in some communities. See the 11/3/18 Observer article "Are E-Books Finally Over? The Publishing Industry Unexpectedly Tilts Back to Print" and Vox’s 12/18/18 piece, “Instagram is helping save the indie bookstore”.

More recently, Massive Open Online Courses (MOOCs) were touted as the game-changer that would have higher education quaking in its ivory-tower boots. See Thomas L. Friedman's 2013 New York Times opinion piece "Revolution Hits the Universities"; five years later, in 2018, a MOOC-driven revolution seems less inevitable, or perhaps even less desirable, than postulated when MOOCs became all the rage in some quarters. Even a few months before Friedman’s article, his New York Times employer had declared 2012 “The Year of the MOOC.” In pertinent part from that article:


“I like to call this the year of disruption,” says Anant Agarwal, president of edX, “and the year is not over yet.”

MOOCs have been around for a few years as collaborative techie learning events, but this is the year everyone wants in. [Note to the author: you might just want to qualify and/or substantiate that hyperbolic assertion a bit about “everyone”!] Elite universities are partnering with Coursera at a furious pace. It now offers courses from 33 of the biggest names in postsecondary education, including Princeton, Brown, Columbia and Duke. In September, Google unleashed a MOOC-building online tool, and Stanford unveiled Class2Go with two courses.

Nick McKeown is teaching one of them, on computer networking, with Philip Levis (the one with a shock of magenta hair in the introductory video). Dr. McKeown sums up the energy of this grand experiment when he gushes, “We’re both very excited.” 

But read on, to the very next two sentences in the piece:

Casually draped over auditorium seats, the professors also acknowledge that they are not exactly sure how this MOOC stuff works.

“We are just going to see how this goes over the next few weeks,” says Dr. McKeown.

Yes, you read that right: 

“…they are not exactly sure how this MOOC stuff works.” And: “‘We are just going to see how this goes over the next few weeks,’ says Dr. McKeown.”

Now, in 2018, who is even talking about MOOCs? Certainly, MOOCs are neither totally dead nor completely out of the education picture. But the fever-pitch exhortations around the First Coming of the MOOC have ebbed, as hype machines—and change consultants—have inevitably moved on to “the next bright shiny object.”

Technology has many good points, as well as bad points, and, shall we say, aspects that cause legitimate concern. It’s here to stay. I get that. Appreciating the many positive aspects of technology in our lives does not mean we can’t and shouldn’t still ask questions about its adoption and use. As a mentor of mine often points out, society frequently pushes people to make binary choices, to select either X or Y, when we may instead select both X and Y. The phrase Question Authority was popularized in the boundary-changing 1960s. Its pedigree is murky and may trace back to ancient Greece. That’s a topic for another piece by someone else. But the phrase, modified to Question Technology, can serve as an inspirational springboard for today.

Happily, 2018 also saw more and more calls for AI ethics, data ethics, ethics courses in computer science and other educational programs, and other permutations of ethics in technology. (And that’s not even getting into all the calls for ethics in government!) Arguably, 2018 was the year that ethics was writ large.

In sum, we need to remind ourselves to be wary of anyone or any entity claiming to know with absolute certainty what a new technology will or will not do today, a year from now, or 10+ years into the fast-moving future, particularly absent hard evidence to support such claims. Just because someone says it’s so doesn’t make it so. Or that it should be so.

In this era of digitally dispersed disinformation, misinformation, and “alternate facts,” we all need to think critically, question pronouncements and projections, and verify the truthfulness of assertions with evidence-based analysis and bona fide facts.


Friday, December 21, 2018

Facebook: A Case Study in Ethics; CMS Wire, December 20, 2018

Laurence Hart, CMS Wire; Facebook: A Case Study in Ethics 

"It feels like every week, a news item emerges that could serve as a case study in ethics. A company's poor decision when exposed to the light of day (provided by the press) seems shockingly bad. The ethical choice in most cases should have been obvious, but it clearly wasn’t the one made.

This week, as in many weeks in 2018, the case study comes from Facebook."