Thursday, January 31, 2019

Facebook has declared sovereignty; The Washington Post, January 31, 2019

Molly Roberts, The Washington Post; Facebook has declared sovereignty

"That’s a lot of control, as Facebook has implicitly conceded by creating this court. But the court alone cannot close the chasm of accountability that renders Facebook’s preeminence so unsettling. Democracy, at least in theory, allows us to change things we do not like. We can vote out legislators who pass policy we disagree with, or who fail to pass policy at all. We cannot vote out Facebook. We can only quit it.

But can we really? Facebook has grown so large and, in many countries, essential that deleting an account seems to many like an impossibility. Facebook isn’t even just Facebook anymore: It is Instagram and WhatsApp, too. To people in many less developed countries, it is the Internet. Many users may feel more like citizens than customers, in that they cannot just quit. But they are not being governed with their consent.

No court — or oversight board — can change that."

An angry historian ripped the ultrarich over tax avoidance at Davos. Then one was given the mic.; The Washington Post, January 31, 2019

Eli Rosenberg, The Washington Post; An angry historian ripped the ultrarich over tax avoidance at Davos. Then one was given the mic.

"Rutger Bregman, a Dutch historian and author who studies poverty and global inequality, had a first this year: being invited to the world’s most prominent gathering of wealthy people — the World Economic Forum’s annual meeting in Switzerland — as a speaker...

[Bregman]...decided to say something during the panel discussion about income inequality he was on, hosted by Time magazine on Friday. He started by saying that he found the conference’s mix of indulgence and global problem-solving a bit bewildering.

“I mean 1,500 private jets have flown in here to hear Sir David Attenborough speak about how we’re wrecking the planet," he said. "I hear people talking the language of participation and justice and equality and transparency. But then almost no one raises the real issue of tax avoidance. And of the rich just not paying their fair share. It feels like I’m at a firefighters conference and no one is allowed to speak about water.

“This is not rocket science,” he said. “We can talk for a very long time about all these stupid philanthropy schemes, we can invite Bono once more, but, come on, we got to be talking about taxes. That’s it. Taxes, taxes, taxes — all the rest is bulls---, in my opinion.”

The doorbells have eyes: The privacy battle brewing over home security cameras; The Washington Post, January 31, 2019

Geoffrey A. Fowler, The Washington Post; The doorbells have eyes: The privacy battle brewing over home security cameras

"We should recognize this pattern: Tech that seems like an obvious good can develop darker dimensions as capabilities improve and data shifts into new hands. A terms-of-service update, a face-recognition upgrade or a hack could turn your doorbell into a privacy invasion you didn’t see coming."

The Role Of The Centre For Data Ethics And Innovation - What It Means For The UK; Mondaq, January 22, 2019

Jocelyn S. Paulley and David Brennan, Gowling WLG, Mondaq; The Role Of The Centre For Data Ethics And Innovation - What It Means For The UK

"What is the CDEI's role?

The CDEI will operate as an independent advisor to the government and will be led by an independent board of expert members with three core functions:

  • analysing and anticipating risks and opportunities such as gaps in governance and regulation that could impede the ethical and innovative deployment of data and AI;
  • agreeing and articulating best practice such as codes of conduct and standards that can guide ethical and innovative uses of AI; and
  • advising government on the need for action including specific policy or regulatory actions required to address or prevent barriers to innovative and ethical uses of data.
As part of providing these functions, the CDEI will operate under the following principles:

  • appropriately balance objectives for ethical and innovative uses of data and AI to ensure they deliver the greatest benefit for society and the economy;
  • take into account the economic implications of its advice, including the UK's attractiveness as a place to invest in the development of data-driven technologies;
  • provide advice that is independent, impartial, proportionate and evidence-based; and
  • work closely with existing regulators and other institutions to ensure clarity and consistency of guidance.
The CDEI's first project will be exploring the use of data in shaping people's online experiences and investigating the potential for bias in decisions made using algorithms. It will also publish its first strategy document by spring 2019 where it will set out how it proposes to operate with other organisations and other institutions recently announced by the government, namely the AI Council and the Office for AI."

Recent events highlight an unpleasant scientific practice: ethics dumping; The Economist, January 31, 2019

The Economist; Recent events highlight an unpleasant scientific practice: ethics dumping

Rich-world scientists conduct questionable experiments in poor countries

"Ethics dumping is the carrying out by researchers from one country (usually rich, and with strict regulations) in another (usually less well off, and with laxer laws) of an experiment that would not be permitted at home, or of one that might be permitted, but in a way that would be frowned on. The most worrisome cases involve medical research, in which health, and possibly lives, are at stake. But other investigations—anthropological ones, for example—may also be carried out in a more cavalier fashion abroad. As science becomes more international the risk of ethics dumping, both intentional and unintentional, has risen. The suggestion in this case is that Dr He was encouraged and assisted in his project by a researcher at an American university."

State looking for a few ethical people; The Garden Island: Kauai's newspaper since 1901, January 31, 2019

Editorial, The Garden Island: Kauai's newspaper since 1901; State looking for a few ethical people

"Not so easy to say what’s ethical these days, as it seems to depend on one’s standards. Either way, if you’re one of those ethical people, the Hawaii State Ethics Commission wants to hear from you.

The Judicial Council is seeking applicants to fill one upcoming vacancy on the Hawaii State Ethics Commission. The term will run from July 1, 2019 through June 30, 2023...

Some of our brightest minds have tackled the significance of ethics. Here is what a few of them had to say:

• “History shows that where ethics and economics come in conflict, victory is always with economics. Vested interests have never been known to have willingly divested themselves unless there was sufficient force to compel them.” — B. R. Ambedkar

• “The first step in the evolution of ethics is a sense of solidarity with other human beings.” — Albert Schweitzer

• “Non-violence leads to the highest ethics, which is the goal of all evolution. Until we stop harming all other living beings, we are still savages.” — Thomas A. Edison

• “Ethics are more important than laws.” — Wynton Marsalis"

Facebook has been paying teens $20 a month for access to all of their personal data; Vox, January 30, 2019

Kaitlyn Tiffany, Vox; Facebook has been paying teens $20 a month for access to all of their personal data

"The shocking “research” program has restarted a long-standing feud between Facebook and Apple.

Facebook, now entering a second year of huge data-collection scandals, can’t really afford this particular news story. However, it’s possible the company just weighed the risks of public outrage against the benefits of the data and made a deliberate choice: Knowing which apps people are using, how they’re using them, and for how long is extremely useful information for Facebook."

Wednesday, January 30, 2019

Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’; The New York Times, January 29, 2019

Thomas L. Friedman, The New York Times; Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’


[Kip Currier: I just posted this Thomas L. Friedman New York Times piece as a MUST read for the students in my Information Ethics course. The "money quote" regarding the crux of the issue: 

"Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses."

And the call to action for all those who care and can do something, even if it is solely to raise awareness of the promise AND perils of these "deep" technologies:


"This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust."


Friedman leaves out another important "opening"--EDUCATION--and a critically important stakeholder group that is uniquely positioned and which must be ready and able to help prepare citizenry to critically evaluate "deep" technologies and information of all kinds--EDUCATORS.]
 

"Well, even though it’s early, I’m ready to declare the word of the year for 2019.

The word is “deep.”

Why? Because recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before. I’m talking about deep learning, deep insights, deep surveillance, deep facial recognition, deep voice recognition, deep automation and deep artificial minds.

Some of these technologies offer unprecedented promise and some unprecedented peril — but they’re all now part of our lives. Everything is going deep...

Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses...

This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust.

But deep trust and deep loyalty cannot be forged overnight. They take time. That’s one reason this old newspaper I work for — the Gray Lady — is doing so well today. Not all, but many people, are desperate for trusted navigators.

Many will also look for that attribute in our next president, because they sense that deep changes are afoot. It is unsettling, and yet, there’s no swimming back. We are, indeed, far from the shallow now."


Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones; The New York Times, January 29, 2019

Nicole Perlroth, The New York Times; Apple Was Slow to Act on FaceTime Bug That Allows Spying on iPhones


"A bug this easy to exploit is every company’s worst security nightmare and every spy agency, cybercriminal and stalker’s dream. In emails to Apple’s product security team, Ms. Thompson noted that she and her son were just everyday citizens who believed they had uncovered a flaw that could undermine national security." 

“My fear is that this flaw could be used for nefarious purposes,” she wrote in a letter provided to The New York Times. “Although this certainly raises privacy and security issues for private individuals, there is the potential that this could impact national security if, for example, government members were to fall victim to this eavesdropping flaw."

Tuesday, January 29, 2019

FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated; The Atlantic, January 29, 2019

Ian Bogost, The Atlantic; FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated.

"Trustworthy is hardly a word many people use to characterize big tech these days. Facebook’s careless infrastructure upended democracy. Abuse is so rampant on Twitter and Instagram that those services feel designed to deliver harassment rather than updates from friends. Hacks, leaks, and other breaches of privacy, at companies from Facebook to Equifax, have become so common that it’s hard to believe that any digital information is secure. The tech economy seems designed to steal and resell data."

New definition of privacy needed for the social media age; The San Francisco Chronicle, January 28, 2019

Jordan Cunningham, The San Francisco Chronicle; New definition of privacy needed for the social media age

"To bring about meaningful change, we need to fundamentally overhaul the way we define privacy in the social media age.

We need to stop looking at consumers’ data as a commodity and start seeing it as private information that belongs to individuals. We need to look at the impact of technology on young kids with developing brains. And we need to give consumers an easy way to ensure their privacy in homes filled with connected devices.

That’s why I’ve worked with a group of state lawmakers to create the “Your Data, Your Way” package of legislation."

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Meet the data guardians taking on the tech giants; BBC, January 29, 2019

Matthew Wall, BBC; Meet the data guardians taking on the tech giants

"Ever since the world wide web went public in 1993, we have traded our personal data in return for free services from the tech giants. Now a growing number of start-ups think it's about time we took control of our own data and even started making money from it. But do we care enough to bother?"

Big tech firms still don’t care about your privacy; The Washington Post, January 28, 2019

Rob Pegoraro, The Washington Post; Big tech firms still don’t care about your privacy

"Today is Data Privacy Day. Please clap.

This is an actual holiday of sorts, recognized as such in 2007 by the Council of Europe to mark the anniversary of the 1981 opening of Europe’s Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data — the grandfather of such strict European privacy rules as the General Data Protection Regulation.

In the United States, Data Privacy Day has yet to win more official acknowledgment than a few congressional resolutions. It mainly serves as an opportunity for tech companies to publish blog posts about their commitment to helping customers understand their privacy choices.

But in a parallel universe, today might feature different headlines. Consider the following possibilities."

4 Ways AI Education and Ethics Will Disrupt Society in 2019; EdSurge, January 28, 2019

Tara Chklovski, EdSurge; 4 Ways AI Education and Ethics Will Disrupt Society in 2019

 "I see four AI use and ethics trends set to disrupt classrooms and conference rooms. Education focused on deeper learning and understanding of this transformative technology will be critical to furthering the debate and ensuring positive progress that protects social good."

Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain; BoingBoing, January 27, 2019

Cory Doctorow, BoingBoing; Video and audio from my closing keynote at Friday's Grand Re-Opening of the Public Domain

"On Friday, hundreds of us gathered at the Internet Archive, at the invitation of Creative Commons, to celebrate the Grand Re-Opening of the Public Domain, just weeks after the first works entered the American public domain in twenty years.
 

I had the honor of delivering the closing keynote, after a roster of astounding speakers. It was a big challenge and I was pretty nervous, but on reviewing the saved livestream, I'm pretty proud of how it turned out.

Proud enough that I've ripped the audio and posted it to my podcast feed; the video for the keynote is on the Archive and mirrored to Youtube.

The whole event's livestream is also online, and boy do I recommend it."

Monday, January 28, 2019

Ethics as Conversation: A Process for Progress; MIT Sloan Management Review, January 28, 2019

R. Edward Freeman and Bidhan (Bobby) L. Parmar, MIT Sloan Management Review; Ethics as Conversation: A Process for Progress

"We began to use this insight in our conversations with executives and students. We ask them to define what we call “your ethics framework.” Practically, this means defining what set of questions you want to be sure you ask when confronted with a decision or issue that has ethical implications.

The point of asking these questions is partly to anticipate how others might evaluate and interpret your choices and therefore to take those criteria into account as you devise a plan. The questions also help leaders formulate the problem or opportunity in a more nuanced way, which leads to more effective action. You are less likely to be blindsided by negative reactions if you have fully considered a problem.

The exact questions to pose may differ by company, depending on its purpose, its business model, or its more fundamental values. Nonetheless, we suggest seven basic queries that leaders should use to make better decisions on tough issues."

Embedding ethics in computer science curriculum: Harvard initiative seen as a national model; Harvard, John A. Paulson School of Engineering and Applied Sciences, January 28, 2019

Paul Karoff, Harvard, John A. Paulson School of Engineering and Applied Sciences; Embedding ethics in computer science curriculum: Harvard initiative seen as a national model

"Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”
 
Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”"

Sunday, January 27, 2019

Andrew Gillum’s Florida Ethics Troubles Just Got Worse; Slate, January 25, 2019

Mark Joseph Stern, Slate; Andrew Gillum’s Florida Ethics Troubles Just Got Worse

"However Gillum chooses to proceed, it’s clear that Friday’s findings undermine his account and, by extension, his credibility. Throughout the campaign, he insisted that he paid his share of the lavish excursions and never accepted gifts from lobbyists. That narrative is now almost impossible to believe. True, Gillum never performed favors for lobbyists in exchange for their largesse, which would be a federal offense. But even without a quid pro quo, his cozy relationship with lobbyists did not seem to comport with Florida law.

Should Gillum run for office down the road, this blunder will likely be used as a cudgel, risking his ability to win a primary, let alone a general election. Perhaps it is too soon to write off his political career. But if he ever again throws his hat in the ring, his opponents will be ready to pounce with a sordid—and substantiated—tale of corruption."

Five myths about conspiracy theories; The Washington Post, January 17, 2019

Rob Brotherton (academic psychologist at Barnard College and author of “Suspicious Minds: Why We Believe Conspiracy Theories”), The Washington Post; Five myths about conspiracy theories

"Conspiracy theories have always been around, but lately, they’ve been getting more attention. As the prevalence of conspiracy thinking among the electorate and even within the highest offices of government has become clear, conspiracism has inspired popular think-pieces and attracted scholars. Along the way, conspiracy theories have also inspired plenty of myths. Here are five."

Can we make artificial intelligence ethical?; The Washington Post, January 23, 2019

Stephen A. Schwarzman, The Washington Post; Can we make artificial intelligence ethical?

"Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, an investment firm...

Too often, we think only about increasing our competitiveness in terms of advancing the technology. But the effort can’t just be about making AI more powerful. It must also be about making sure AI has the right impact. AI’s greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn’t anticipate. AI’s detractors warn of a dystopian nightmare in which AI rapidly replaces human beings at many jobs and tasks. If we want to realize AI’s incredible potential, we must also advance AI in a way that increases the public’s confidence that AI benefits society. We must have a framework for addressing the impacts and the ethics.

What does an ethics-driven approach to AI look like?

It means asking not only whether AI can be used in certain circumstances, but should it?

Companies must take the lead in addressing key ethical questions surrounding AI. This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave, when to disclose the use of AI to consumers, how to address concerns about AI’s effect on privacy, and how to respond to employee fears about AI’s impact on jobs.

As Thomas H. Davenport and Vivek Katyal argue in the MIT Sloan Management Review, we must also recognize that AI often works best with humans instead of by itself."

 

Friday, January 25, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values; The New Yorker, January 24, 2019

Caroline Lester, The New Yorker; A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

"The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The decisions made by Germany will reverberate beyond its borders. Volkswagen sells more automobiles than any other company in the world. But that manufacturing power comes with a complicated moral responsibility. What should a company do if another country wants its vehicles to reflect different moral calculations? Should a Western car de-prioritize the young in an Eastern country? Shariff leans toward adjusting each model for the country where it’s meant to operate. Car manufacturers, he thinks, “should be sensitive to the cultural differences in the places they’re instituting these ethical decisions.” Otherwise, the algorithms they export might start looking like a form of moral colonialism. But Di Fabio worries about letting autocratic governments tinker with the code. He imagines a future in which China wants the cars to favor people who rank higher in its new social-credit system, which scores citizens based on their civic behavior."

Thursday, January 24, 2019

Drones unleashed against invasive rats in the Galápagos; Nature, January 24, 2019

Emma Marris, Nature; Drones unleashed against invasive rats in the Galápagos

"One advantage of using drones, Morley says, is that it reduces the need to cut trails through a forest to lay poison baits or traps. He is still working on ways to use the drones to monitor whether projects are successful, playing with acoustic, optical or other sensors that the drones could drop near the poison.

Using drones to kill could also change how conservation scientists view such work, Morley says, comparing the approach to modern warfare. “You used to be able to see your opponent. Now, you just press a button and you fire a missile,” he says. “You become a little bit detached from the reality that you have killed something or somebody over there.”

That emotional distance could be seen as a benefit of the technology, or as a problem, says Chelsea Batavia, a scholar of conservation ethics at Oregon State University in Corvallis. She feels that people who kill animals for conservation should allow themselves to feel the moral weight of their actions, and even grieve. “Have a conversation about what you are doing and talk through that as a group,” she advises. “Let the impact of what you are doing hit you.”"

Drone Scare Near New York City Shows Hazard Posed to Air Travel; The New York Times, January 23, 2019

Patrick McGeehan and Cade Metz, The New York Times; Drone Scare Near New York City Shows Hazard Posed to Air Travel

"The disruption was all the more alarming because it came just one month after reported drone sightings caused the shutdown of Gatwick Airport in London, one of the busiest in Europe.

The upheaval at Newark illustrated how vulnerable the air-travel system is to the proliferation of inexpensive drones that can weigh as much as 50 pounds and are capable of flying high and fast enough to get in the path of commercial jets, experts on aviation safety and drone technology said. It also raised questions about whether airports are prepared enough to identify drones and prevent them from paralyzing travel and leaving passengers stranded.

“This is a really disturbing trend,” said John Halinski, former deputy administrator of the federal Transportation Security Administration. “It is a real problem because drones are multiplying every day. They really pose a threat in a number of ways to civil aviation.”"

I Found $90 in the Subway. Is It Yours?; The New York Times, January 24, 2019

Niraj Chokshi, The New York Times; I Found $90 in the Subway. Is It Yours?

"As I got off a train in Manhattan on Wednesday, I paid little attention to a flutter out of the corner of my eye on the subway. Then another passenger told me that I had dropped some money.

“That isn’t mine,” I told her as I glanced at what turned out to be $90 on the ground.

I realized the flutter had been the money falling out of the coat of a man standing near me who had just stepped off the train.

The doors were about to close, and no one was acting, so I grabbed the cash and left the train. But I was too late. The man had disappeared into the crowd. I waited a few minutes to see if he would return, but he was long gone. I tried to find a transit employee or police officer, but none were in sight.

I was running late, so I left. But now what? What are you supposed to do with money that isn’t yours?"

This Time It’s Russia’s Emails Getting Leaked; The Daily Beast, January 24, 2019

Kevin Poulsen, The Daily Beast; This Time It’s Russia’s Emails Getting Leaked

"Russian oligarchs and Kremlin apparatchiks may find the tables turned on them later this week when a new leak site unleashes a compilation of hundreds of thousands of hacked emails and gigabytes of leaked documents. Think of it as WikiLeaks, but without Julian Assange’s aversion for posting Russian secrets.

The site, Distributed Denial of Secrets, was founded last month by transparency activists. Co-founder Emma Best said the Russian leaks, slated for release on Friday, will bring into one place dozens of different archives of hacked material that at best has been difficult to locate, and in some cases appears to have disappeared entirely from the web...

Distributed Denial of Secrets, or DDoS, is a volunteer effort that launched last month. Its objective is to provide researchers and journalists with a central repository where they can find the terabytes of hacked and leaked documents that are appearing on the internet with growing regularity. The site is a kind of academic library or a museum for leak scholars, housing such diverse artifacts as the files North Korea stole from Sony in 2014, and a leak from the Special State Protection Service of Azerbaijan."

Trapped in a hoax: survivors of conspiracy theories speak out; The Guardian, January 24, 2019

The Guardian; Trapped in a hoax: survivors of conspiracy theories speak out

"Conspiracy theories used to be seen as bizarre expressions of harmless eccentrics. Not any more. Gone are the days of outlandish theories about Roswell’s UFOs, the “hoax” moon landings or grassy knolls. Instead, today’s iterations have morphed into political weapons. Turbocharged by social media, they spread with astonishing speed, using death threats as currency.

Together with their first cousins, fake news, they are challenging society’s trust in facts. At its most toxic, this contagion poses a profound threat to democracy by damaging its bedrock: a shared commitment to truth...

Amid this explosive growth, one aspect has been under-appreciated: the human cost. What is the toll paid by those caught up in these falsehoods? And how are they fighting back?"

Tuesday, January 22, 2019

If Mark Zuckerberg Wants to Talk, Britain Is Waiting: Facebook leadership has a history of lashing out instead of opening up; The New York Times, January 22, 2019

Damian Collins, The New York Times; If Mark Zuckerberg Wants to Talk, Britain Is Waiting: Facebook leadership has a history of lashing out instead of opening up

"Mr. Collins is a member of the British Parliament....

So much of our lives is organized through social media, and many people use social media platforms as the main source of information about the world around them. We cannot allow this public space to become a complete wild West, with little or no protection for the citizen user. The rights and responsibilities that we enjoy in the real world need to exist and be protected online as well."

The AI Arms Race Means We Need AI Ethics; Forbes, January 22, 2019

Kasia Borowska, Forbes; The AI Arms Race Means We Need AI Ethics

"In an AI world, the currency is data. Consumers and citizens trade data for convenience and cheaper services. The likes of Facebook, Google, Amazon, Netflix and others process this data to make decisions that influence likes, the adverts we see, purchasing decisions or even who we vote for. There are questions to ask on the implications of everything we access, view or read being controlled by a few global elite. There are also major implications if small companies or emerging markets are unable to compete from being priced out of the data pool. This is why access to AI is so important: not only does it enable more positives from AI to come to the fore, but it also helps to prevent monopolies forming. Despite industry-led efforts, there are no internationally agreed ethical rules to regulate the AI market."

Leading privacy scholar to speak in Dublin; Law Society Gazette Ireland, January 21, 2019

Law Society Gazette Ireland; Leading privacy scholar to speak in Dublin

"“Navigating Privacy in a Data Centric World” is the topic for a talk at Regent House in Trinity College later this month.

On Monday, 28 January at 4pm Jules Polonetsky of the Future of Privacy Forum (FPF) will give a public lecture on how almost every area of technical progress today is reliant on ever broader access to personal information.

Dr Jules Polonetsky is known for his book ‘A Theory of Creepy: Technology, Privacy and Shifting Social Norms’.

He believes that the rapid evolution of digital technologies has thrown up social and ethical dilemmas that we have hardly begun to understand.

Companies, academic researchers, governments and philanthropists utilise ever more sensitive data about individuals’ movements, health, online browsing, home activity and social interactions."