Showing posts with label AI algorithms.

Saturday, November 24, 2018

Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.; The Washington Post, November 23, 2018

Drew Harwell, The Washington Post; Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.

"Predictim’s chief and co-founder Sal Parsa said the company, launched last month as part of the University of California at Berkeley’s SkyDeck tech incubator, takes ethical questions about its use of the technology seriously. Parents, he said, should see the ratings as a companion that “may or may not reflect the sitter’s actual attributes.”...

...[T]ech experts say the system raises red flags of its own, including worries that it is preying on parents’ fears to sell personality scans of untested accuracy.

They also question how the systems are being trained and how vulnerable they might be to misunderstanding the blurred meanings of sitters’ social media use. For all but the highest-risk scans, the parents are given only a suggestion of questionable behavior and no specific phrases, links or details to assess on their own."

Monday, May 21, 2018

How the Enlightenment Ends; The Atlantic, June 2018 Issue

Henry A. Kissinger, The Atlantic; How the Enlightenment Ends


"Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data...

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late."