
Friday, February 5, 2016

Big Data Ethics: racially biased training data versus machine learning; BoingBoing.net, 2/5/16

Cory Doctorow, BoingBoing.net; Big Data Ethics: racially biased training data versus machine learning:
"Writing in Slate, Cathy "Weapons of Math Destruction" O'Neill, a skeptical data-scientist, describes the ways that Big Data intersects with ethical considerations.
O'Neil recounts an exercise to improve service to homeless families in New York City, in which data analysis was used to identify risk factors for long-term homelessness. The problem, O'Neil explains, was that many of the factors in the existing data on homelessness were entangled with race (and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York). Using data that reflects racism in the system to train a machine-learning algorithm whose conclusions can't be readily understood runs the risk of embedding that racism in a new set of policies, ones scrubbed clean of the appearance of bias by the application of objective-seeming mathematics.
We talk a lot about algorithms in the context of Big Data, but the algorithms themselves are well understood and pretty universal -- they're the same ones used in mass surveillance and ad targeting. The training data, though, is subject to the same problems every science faces when it tries to get a good, random sample for its analysis. Just as bad sampling can blow up a medical trial or a psych experiment, it can also confound Big Data analysis. Rather than calling for algorithmic transparency, we need to call for data transparency, methodological transparency, and sampling transparency."
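
To make O'Neil's proxy point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration -- the "group" flag standing in for race, the synthetic ZIP codes, the coefficients -- but it shows the mechanism: a classifier trained *without* the protected attribute still reconstructs it through a correlated proxy, so its supposedly race-blind scores diverge sharply between groups.

```python
# Hypothetical sketch: a "blind" model recovers a dropped protected attribute
# through a proxy feature. All data and coefficients here are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# In a heavily segregated city, ZIP code strongly predicts group membership.
group = rng.integers(0, 2, n)                                # protected attribute
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)   # 90%-aligned proxy

# Historical labels encode past bias: group 1 was flagged "high risk"
# far more often, independent of any legitimate factor.
income = rng.normal(0, 1, n)                                 # a legitimate feature
label = (0.3 * income + 1.5 * group + rng.normal(0, 1, n)) > 1.0

# Train WITHOUT the protected attribute -- only income and the ZIP proxy.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, label)

# The "blind" model still scores the two groups very differently, because
# the ZIP proxy carries the signal the dropped column used to carry.
pred = model.predict_proba(X)[:, 1]
print(f"mean predicted risk, group 0: {pred[group == 0].mean():.2f}")
print(f"mean predicted risk, group 1: {pred[group == 1].mean():.2f}")
```

Dropping the sensitive column is not enough; as long as some remaining feature correlates with it, the model can and will route around the omission.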
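The sampling point can be sketched the same way. In this toy example (a fabricated population and a made-up collection bias), the identical estimator gives a confidently wrong answer on the non-random sample -- the Big Data analogue of the blown medical trial.

```python
# Toy illustration of sampling bias: same estimator, non-random sample,
# systematically wrong answer. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=50, scale=10, size=100_000)   # true mean = 50

# A truly random sample recovers the population mean.
random_sample = rng.choice(population, size=1_000, replace=False)

# A biased sample -- say, data gathered only where it was easy to gather --
# over-represents high values.
weights = np.exp(population / 10)
biased_sample = rng.choice(population, size=1_000, replace=False,
                           p=weights / weights.sum())

print(f"true mean:          {population.mean():.1f}")
print(f"random sample mean: {random_sample.mean():.1f}")
print(f"biased sample mean: {biased_sample.mean():.1f}")
```

Nothing about the estimator changed between the two runs; only the sampling did. That is why transparency about how the data was collected matters at least as much as transparency about the algorithm that consumes it.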