Showing posts with label algorithms. Show all posts

Thursday, March 8, 2018

Exploring AI ethics and accountability; Politico.eu, March 5, 2018

Nirvi Shah, Politico.eu; Exploring AI ethics and accountability

"In this special report on the future of artificial intelligence, we explore the technology’s implications. Are people ready to trust their lives to driverless cars? What about an AI doctor? Who’s to blame when price-setting algorithms work together to collude?

We also spoke to Armin Grunwald, an adviser to the German parliament tasked with mapping out the ethical implications of artificial intelligence. Grunwald, it turns out, has an answer to the trolley problem.

This article is part of the special report Confronting the Future of AI."

Tuesday, March 6, 2018

The tyranny of algorithms is part of our lives: soon they could rate everything we do; Guardian, March 5, 2018

John Harris, Guardian; The tyranny of algorithms is part of our lives: soon they could rate everything we do

"The tyranny of algorithms is now an inbuilt part of our lives.

These systems are sprawling, often randomly connected, and often beyond logic. But viewed from another angle, they are also the potential constituent parts of comprehensive social credit systems, awaiting the moment at which they will be glued together. That point may yet come, thanks to the ever-expanding reach of the internet. If our phones and debit cards already leave a huge trail of data, the so-called internet of things is now increasing our informational footprint at speed...

Personal data and its endless uses form one of the most fundamental issues of our time, which boils down to the relationship between the individual and power, whether exercised by government or private organisations."

Wednesday, February 21, 2018

Patenting the Future of Medicine: The Intersection of Patent Law and Artificial Intelligence in Medicine; Lexology, February 14, 2018

Finnegan, Henderson, Farabow, Garrett & Dunner LLP - Susan Y. Tull, Lexology; Patenting the Future of Medicine: The Intersection of Patent Law and Artificial Intelligence in Medicine

"Artificial intelligence (AI) is rapidly transforming the world of medicine, and the intellectual property directed to these inventions must keep pace. AI computers are diagnosing medical conditions and disorders at a rate equal to or better than their human peers, all while developing their own software code and algorithms to do so. These recent advances raise issues of patentability, inventorship, and ownership as machine-based learning evolves."

Tuesday, February 20, 2018

Should Data Scientists Adhere to a Hippocratic Oath?; Wired, February 8, 2018

Tom Simonite, Wired; Should Data Scientists Adhere to a Hippocratic Oath?

"The tech industry is having a moment of reflection. Even Mark Zuckerberg and Tim Cook are talking openly about the downsides of software and algorithms mediating our lives. And while calls for regulation have been met with increased lobbying to block or shape any rules, some people around the industry are entertaining forms of self regulation. One idea swirling around: Should the programmers and data scientists massaging our data sign a kind of digital Hippocratic oath?

Microsoft released a 151-page book last month on the effects of artificial intelligence on society that argued “it could make sense” to bind coders to a pledge like that taken by physicians to “first do no harm.” In San Francisco Tuesday, dozens of data scientists from tech companies, governments, and nonprofits gathered to start drafting an ethics code for their profession."

Monday, February 19, 2018

AI ‘gaydar’ could compromise LGBTQ people’s privacy — and safety; Washington Post, February 19, 2018

JD Schramm, Washington Post; AI ‘gaydar’ could compromise LGBTQ people’s privacy — and safety

"The advances in AI and machine learning make it increasingly difficult to hide such intimate traits as sexual orientation, political and religious affiliations, and even intelligence level. The post-privacy future Kosinski examines in his research is upon us. Never has the work of eliminating discrimination been so urgent."

Thursday, January 25, 2018

Exclusive: Theresa May to announce ethical oversight of AI used to drive cars, diagnose patients and even sentence criminals; The Telegraph, January 22, 2018

Steven Swinford, The Telegraph; Exclusive: Theresa May to announce ethical oversight of AI used to drive cars, diagnose patients and even sentence criminals

"The Prime Minister is expected to use her keynote speech at a summit of world leaders in Davos on Thursday to discuss the opportunities and ethical challenges presented by the rise of artificial intelligence.

Ministers believe that Britain has the chance to become a world leader in artificial intelligence, just as it currently is in other cutting-edge technologies such as genomics.

However, there are significant concerns that computer algorithms could end up making critical ethical decisions without human oversight."

Sunday, July 16, 2017

How can we stop algorithms telling lies?; Guardian, July 16, 2017

Cathy O'Neil, Guardian; How can we stop algorithms telling lies?

[Kip Currier: Cathy O'Neil is shining much-needed light on the little-known but influential power of algorithms on key aspects of our lives. I'm using her thought-provoking 2016 Weapons of Math Destruction: How Big Data Increases Inequality And Threatens Democracy as one of several required reading texts in my Information Ethics graduate course at the University of Pittsburgh's School of Computing and Information.]

"A proliferation of silent and undetectable car crashes is harder to investigate than when it happens in plain sight.

I’d still maintain there’s hope. One of the miracles of being a data sceptic in a land of data evangelists is that people are so impressed with their technology, even when it is unintentionally creating harm, they openly describe how amazing it is. And the fact that we’ve already come across quite a few examples of algorithmic harm means that, as secret and opaque as these algorithms are, they’re eventually going to be discovered, albeit after they’ve caused a lot of trouble.

What does this mean for the future? First and foremost, we need to start keeping track. Each criminal algorithm we discover should be seen as a test case. Do the rule-breakers get into trouble? How much? Are the rules enforced, and what is the penalty? As we learned after the 2008 financial crisis, a rule is ignored if the penalty for breaking it is less than the profit pocketed. And that goes double for a broken rule that is only discovered half the time...

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand evidence that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently. When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around."

Tuesday, June 13, 2017

When a Computer Program Keeps You in Jail; New York Times, June 13, 2017

Rebecca Wexler, New York Times; When a Computer Program Keeps You in Jail

"The criminal justice system is becoming automated. At every stage — from policing and investigations to bail, evidence, sentencing and parole — computer systems play a role. Artificial intelligence deploys cops on the beat. Audio sensors generate gunshot alerts. Forensic analysts use probabilistic software programs to evaluate fingerprints, faces and DNA. Risk-assessment instruments help to determine who is incarcerated and for how long.

Technological advancement is, in theory, a welcome development. But in practice, aspects of automation are making the justice system less fair for criminal defendants.

The root of the problem is that automated criminal justice technologies are largely privately owned and sold for profit. The developers tend to view their technologies as trade secrets. As a result, they often refuse to disclose details about how their tools work, even to criminal defendants and their attorneys, even under a protective order, even in the controlled context of a criminal proceeding or parole hearing."

Monday, May 15, 2017

Can you teach ethics to algorithms?; CIO, May 15, 2017

James Maclennan, CIO; Can you teach ethics to algorithms?

"The challenges of privacy

Addressing bias is a challenge, but most people understand that discrimination and bias are bad. What happens when we get into trickier ethical questions such as privacy?

Just look at Facebook and Google, two companies that have mountains of information on you. A recent report uncovered that Facebook “can figure out when people as young as 14 feel ‘defeated,’ ‘overwhelmed’ and ‘a failure.’” This information is gathered by a Facebook analysis system, and it is easy to see how such information could be abused.

The fact that the information uncovered by such an algorithm could be so easily abused does not make the algorithm morally wrong. Facebook decided to create the algorithm without considering the ethical implications of manipulating depressed teenagers to buy more stuff, and thus the responsibility falls on Facebook and not the algorithm.

Facebook at minimum needs to encourage its own technological staff to think about the ethical consequences of any new algorithm they construct. If Facebook and other technology companies fail to consider user privacy when constructing algorithms, then the government may have to step in to ensure that people's rights are protected."

Sunday, March 12, 2017

That Health Tracker Could Cost You; Bloomberg, February 23, 2017

Cathy O'Neil, Bloomberg; That Health Tracker Could Cost You

"Say, for example, left-handed people with vegetarian diets prove more likely to require expensive medical treatments. Insurance companies might then start charging higher premiums to people with similar profiles -- that is, to those the algorithm has tagged as potentially costly. Granted, the Affordable Care Act currently prohibits such discrimination. But that could change if Donald Trump fulfills his promise to repeal the law.

Think about what that means for insurance...

If we're not careful, pretty soon it’ll be almost like there's no insurance at all."

Tuesday, January 3, 2017

Algorithms: AI’s creepy control must be open to inspection; Guardian, January 1, 2017

Luke Dormehl, Guardian; Algorithms: AI’s creepy control must be open to inspection:

"The issue of AI accountability is shaping up to be one of this year’s hot topics, ethically and technologically. Recently, researchers at Massachusetts Institute of Technology’s computer science and artificial intelligence laboratory published preliminary work on deep learning neural networks that can not only offer predictions and classifications, but also rationalise their decisions.

Artificial intelligence achieved a lot in 2016. One of the goals in 2017 should be to make its workings more transparent. With plenty riding on it, this could be the year when, to coin a phrase, we begin to take back control."

Wednesday, November 30, 2016

How to solve Facebook's fake news problem: experts pitch their ideas; Guardian, November 29, 2016

Nicky Woolf, Guardian; How to solve Facebook's fake news problem: experts pitch their ideas:
"...[A] growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser...

“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”...

Most of the solutions fall into three general categories: the hiring of human editors; crowdsourcing; and technological or algorithmic solutions."

Friday, November 25, 2016

Facebook doesn't need to ban fake news to fight it; Guardian, November 25, 2016

Alex Hern, Guardian; Facebook doesn't need to ban fake news to fight it:
"Those examples are the obvious extreme of Facebook’s problem: straightforward hoaxes, mendaciously claiming to be sites that they aren’t. Dealing with them should be possible, and may even be something the social network can tackle algorithmically, as it prefers to do.

But they exist at the far end of a sliding scale, and there’s little agreement on where to draw the line. Open questions like this explain why many are wary of pushing Facebook to “take action” against fake news. “Do we really want Facebook exercising this sort of top-down power to determine what is true or false?” asks Politico’s Jack Shafer. “Wouldn’t we be revolted if one company owned all the newsstands and decided what was proper and improper reading fare?”

The thing is, Facebook isn’t like the newsstands. And it’s the differences between the two that are causing many of the problems we see today."

Germany is worried about fake news and bots ahead of election; The Verge, November 25, 2016

Amar Toor, The Verge; Germany is worried about fake news and bots ahead of election:
"Angela Merkel this week warned that fake news and bots may influence Germany’s national elections next year, days after she announced plans to seek a fourth term as the country’s chancellor. In a speech to parliament on Wednesday, Merkel said that fake news and bots have “manipulated” public opinion online, adding that lawmakers must “confront this phenomenon and if necessary, regulate it," the AFP reports.

"Something has changed — as globalization has marched on, [political] debate is taking place in a completely new media environment. Opinions aren't formed the way they were 25 years ago," Merkel said. "Today we have fake sites, bots, trolls — things that regenerate themselves, reinforcing opinions with certain algorithms and we have to learn to deal with them."

Donald Trump’s victory in this month’s presidential elections has sparked a debate over the role that fake news played in the US campaign, with some critics saying that Facebook and Twitter should do more to curb misinformation on their platforms. Facebook and Google this month announced that they will exclude fake news sites from their ad networks, while Facebook CEO Mark Zuckerberg said last week that the social network is taking steps to limit the spread of misinformation online."