Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD

Showing posts with label extremist content.

Monday, June 12, 2017

Making Google the Censor; New York Times, June 12, 2017

Daphne Keller, New York Times; Making Google the Censor:
"Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, Ms. May called platforms like Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of 7 countries recently suggested the same thing. Germany wants to fine platforms up to 50 million euros if they don’t quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.
The fears and frustrations behind these proposals are understandable. But making private companies curtail user expression in important public forums — which is what platforms like Twitter and Facebook have become — is dangerous. The proposed laws would harm free expression and information access for journalists, political dissidents and ordinary users. Policy makers should be candid about these consequences and not pretend that Silicon Valley has silver-bullet technology that can purge the internet of extremist content without taking down important legal speech with it."
Tuesday, December 6, 2016
Facebook, Twitter, Google and Microsoft team up to tackle extremist content; Guardian, 12/5/16
Olivia Solon, Guardian; Facebook, Twitter, Google and Microsoft team up to tackle extremist content:
"Google, Facebook, Twitter and Microsoft have pledged to work together to identify and remove extremist content on their platforms through an information-sharing initiative. The companies are to create a shared database of unique digital fingerprints – known as “hashes” – for images and videos that promote terrorism. This could include recruitment videos or violent terrorist imagery or memes. When one company identifies and removes such a piece of content, the others will be able to use the hash to identify and remove the same piece of content from their own network... Because the companies have different policies on what constitutes terrorist content, they will start by sharing hashes of “the most extreme and egregious terrorist images and videos” as they are most likely to violate “all of our respective companies” content policies, they said."
Thursday, June 30, 2016
Exclusive: Google, Facebook Quietly Move Toward Automatic Blocking of Extremist Videos; Reuters via New York Times, 6/24/16
Reuters via New York Times; Exclusive: Google, Facebook Quietly Move Toward Automatic Blocking of Extremist Videos:
"Some of the web’s biggest destinations for watching videos have quietly started using automation to remove extremist content from their sites, according to two people familiar with the process. The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States. YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources said. The technology was originally developed to identify and remove copyright-protected content on video sites. It looks for "hashes," a type of unique digital fingerprint that internet companies automatically assign to specific videos, allowing all content with matching fingerprints to be removed rapidly. Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before."