Showing posts with label data. Show all posts

Saturday, July 15, 2023

‘Not for Machines to Harvest’: Data Revolts Break Out Against A.I.; The New York Times, July 15, 2023

Sheera Frenkel, The New York Times; ‘Not for Machines to Harvest’: Data Revolts Break Out Against A.I.

"At the heart of the rebellions is a newfound understanding that online information — stories, artwork, news articles, message board posts and photos — may have significant untapped value.

The new wave of A.I. — known as “generative A.I.” for the text, images and other content it generates — is built atop complex systems such as large language models, which are capable of producing humanlike prose. These models are trained on hoards of all kinds of data so they can answer people’s questions, mimic writing styles or churn out comedy and poetry...

“What’s happening here is a fundamental realignment of the value of data,” said Brandon Duderstadt, the founder and chief executive of Nomic, an A.I. company...

“The data rebellion that we’re seeing across the country is society’s way of pushing back against this idea that Big Tech is simply entitled to take any and all information from any source whatsoever, and make it their own,” said Ryan Clarkson, the founder of Clarkson...

Eric Goldman, a professor at Santa Clara University School of Law, said the lawsuit’s arguments were expansive and unlikely to be accepted by the court. But the wave of litigation is just beginning, he said, with a “second and third wave” coming that would define A.I.’s future."

Wednesday, January 15, 2020

Ethics In AI: Why Values For Data Matter; Forbes, December 18, 2019

Marc Teerlink, SAP, Global Vice President of Intelligent Enterprise Solutions & Artificial Intelligence, Forbes; Ethics In AI: Why Values For Data Matter

"The Double-Edged Sword of AI and Predictive Analytics

This rising impact can be both a blessing and a concern. It is a blessing — for example when AI and Predictive analytics are using big data to monitor growing conditions, to help an individual farmer make everyday decisions that can determine if they will be able to feed their family (or not).
Yet it can also be a real concern when biased information is applied at the outset, leading machines to make biased decisions, amplifying our human prejudices in a manner that is inherently unfair.

As Joaquim Bretcha, president of ESOMAR, says, “technology is the reflection of the values, principles, interests and biases of its creators”...

What’s the takeaway from this? We need to apply and own governance principles that focus on providing transparency on how Artificial Intelligence and Predictive Analytics achieve their answers.

I will close by asking one question to ponder when thinking about how to treat data as an asset in your organization:

“How will machines know what we value if we don’t articulate (and own) what we value ourselves?” *

Dig deeper: Want to hear more on ethics in AI, transparency, and treating data as an asset? Watch Marc’s recent masterclass at Web Summit 2019 here

*Liberally borrowed from John C. Havens, “Heartificial Intelligence”"

Thursday, April 11, 2019

How The Times Thinks About Privacy; The New York Times, April 10, 2019

A.G. Sulzberger, The New York Times; How The Times Thinks About Privacy

We’re examining our policies and practices around data, too. 

"The Times is committed to continue taking steps to increase transparency and protections. And our journalists will do their part to ensure that the public and policymakers are fully informed by covering these issues aggressively, fairly and accurately. Over the coming months, The Privacy Project will feature reporters investigating how digital privacy is being compromised, Op-Ed editors bringing in outside voices to help foster debate and contextualize trade-offs, and opinion writers calling for solutions. All of us at The Times will be reading closely as well, using their findings to help inform the continuing evolution of our own policies and practices."

Do You Know What You’ve Given Up?; The New York Times, April 10, 2019

James Bennet, The New York Times; Do You Know What You’ve Given Up?

"It seems like a good moment to pause and consider the choices we’ve already made, and the ones that lie ahead. That’s why Times Opinion is launching The Privacy Project, a monthslong initiative to explore the technology, to envision where it’s taking us, and to convene debate about how we should control it to best realize, rather than stunt or distort, human potential."

Thursday, January 31, 2019

The doorbells have eyes: The privacy battle brewing over home security cameras; The Washington Post, January 31, 2019

Geoffrey A. Fowler, The Washington Post; The doorbells have eyes: The privacy battle brewing over home security cameras

"We should recognize this pattern: Tech that seems like an obvious good can develop darker dimensions as capabilities improve and data shifts into new hands. A terms-of-service update, a face-recognition upgrade or a hack could turn your doorbell into a privacy invasion you didn’t see coming."

The Role Of The Centre For Data Ethics And Innovation - What It Means For The UK; Mondaq, January 22, 2019

Jocelyn S. Paulley and David Brennan, Gowling WLG, Mondaq; The Role Of The Centre For Data Ethics And Innovation - What It Means For The UK

"What is the CDEI's role?

The CDEI will operate as an independent advisor to the government and will be led by an independent board of expert members with three core functions:

  • analysing and anticipating risks and opportunities such as gaps in governance and regulation that could impede the ethical and innovative deployment of data and AI;
  • agreeing and articulating best practice such as codes of conduct and standards that can guide ethical and innovative uses of AI; and
  • advising government on the need for action including specific policy or regulatory actions required to address or prevent barriers to innovative and ethical uses of data.
As part of providing these functions, the CDEI will operate under the following principles:

  • appropriately balance objectives for ethical and innovative uses of data and AI to ensure they deliver the greatest benefit for society and the economy;
  • take into account the economic implications of its advice, including the UK's attractiveness as a place to invest in the development of data-driven technologies;
  • provide advice that is independent, impartial, proportionate and evidence-based; and
  • work closely with existing regulators and other institutions to ensure clarity and consistency of guidance.
The CDEI's first project will explore the use of data in shaping people's online experiences and investigate the potential for bias in decisions made using algorithms. It will also publish its first strategy document by spring 2019, setting out how it proposes to work with other organisations and institutions recently announced by the government, namely the AI Council and the Office for AI."

Monday, June 4, 2018

4 Big Takeaways from Satya Nadella's Talk at Microsoft Build; Fortune, May 7, 2018

Jonathan Vanian, Fortune; 4 Big Takeaways from Satya Nadella's Talk at Microsoft Build

"Microsoft Believes in AI and Ethics

Nadella briefly mentioned the company’s internal AI ethics team, whose job is to ensure that the company’s forays into cutting-edge techniques like deep learning don’t unintentionally perpetuate societal biases in its products, among other tasks.

He said that coders need to concentrate on building products that use “good A.I.,” in which “the choices we make can be good choices for the future.”

Expect more technology companies to talk about AI and ethics as a way to alleviate concerns from the public about the tech industry’s insatiable appetite for data."

Wednesday, April 25, 2018

In global AI race, Europe pins hopes on ethics; Politico, April 25, 2018

Janosch Delcker, Politico; In global AI race, Europe pins hopes on ethics

"One of the central goals in the EU strategy is to provide customers with insight into the systems.

That could be easier said than done.

“Algorithmic transparency doesn’t mean [platforms] have to publish their algorithms,” Ansip said, “but ‘explainability’ is something we want to get.”

AI experts say that to achieve such explainability, companies will, indeed, have to disclose the codes they’re using – and more.

Virginia Dignum, an AI researcher at the Delft University of Technology, said “transparency of AI is more than just making the algorithm transparent,” adding that companies should also have to disclose details such as which data was used to train their algorithms, which data are used to make decisions, how this data was collected, or at which point in the process humans were involved in the decision-making process."

Sunday, January 14, 2018

What an AI ethics expert thinks of 'Black Mirror' Season 4; Mashable, January 12, 2018

Angie Han, Mashable; What an AI ethics expert thinks of 'Black Mirror' Season 4

Spoilers in the linked Mashable article 

[Kip Currier: I recently finished watching the not-too-distant-future tech anthology series Black Mirror's six new Season 4 episodes over the course of a week. In terms of audacious creativity, corkscrew concepts, and visual effects, "U.S.S. Callister" was the clear "ep-to-remember" of this season. Just as the 2017 Emmy Award winner for Outstanding Television Movie, "San Junipero", was the stand-out of Black Mirror Season 3--and, for me, the most memorable (and uncharacteristically upbeat) Black Mirror episode to date. The '80s and '90s "earworm" music callbacks were a big part of San Junipero's charm too!]

"The best Black Mirror episodes don't just leave you wondering whether these futures could happen. They force you to consider what it would mean if they did.

For John C. Havens, these aren't just idle TV musings. He's the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a program that aims to inspire the creation of IEEE Standards around the design and development of artificial intelligence.

In other words, he and his team are the ones trying to keep us from hurtling, unprepared and unaware, into a Black Mirror dystopia. He also happens to be a big Black Mirror fan, which is why we called him up to ask him all the questions that kept us up at night after we finished Season 4."

Sunday, July 2, 2017

Quiet Please, Episode 89: "If I Should Wake Before I Die"; Old Time Radio Downloads, Air Date: February 27, 1949

Old Time Radio Downloads; Quiet Please, Episode 89: "If I Should Wake Before I Die"

[Kip Currier: Heard this cautionary tale--first aired in 1949--on Radio Classics this weekend. Especially prescient and timely, in light of real-world stories like this one, calling for ethics education in IT programs: Lack of ethics education for computer programmers shocks expert]


"Plot: He is the epitome of the "mad scientist." This top scientist doesn't care that his inventions have already vastly altered the world; all he cares about is the vast amount of knowledge he acquires in his research. What the world sees as a destructive weapon is nothing more than a scribble of equations to him. The world is reaching a crucial point at which human knowledge has become dangerous and unwise."

Thursday, June 15, 2017

Ethics And Artificial Intelligence With IBM Watson's Rob High; Forbes, June 13, 2017

Blake Morgan, Forbes; Ethics And Artificial Intelligence With IBM Watson's Rob High

"Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson...

The future of technology is rooted in artificial intelligence. In order to stay ethical, transparency, proof, and trustworthiness need to be at the root of everything AI does for companies and customers. By staying honest and remembering the goals of AI, the technology can play a huge role in how we live and work."

Monday, March 13, 2017

High Above, Drones Keep Watchful Eyes on Wildlife in Africa; New York Times, March 13, 2017

Rachel Nuwer, New York Times; High Above, Drones Keep Watchful Eyes on Wildlife in Africa

"Perhaps the biggest challenge is that conservationists do not know how to most effectively put anti-poaching drones to use, because there have been no rigorous long-term evaluations.

South Africa’s Council for Scientific and Industrial Research conducted a two-month trial with U.D.S. and concluded that the technology is “a remarkable support tool,” but officials have yet to release the data supporting those findings.

Most evidence supporting drones is anecdotal: Mr. Coetzee said he has seen a significant reduction in park incursions when and where drones fly, but added that other factors could have been at play. Drones may deter trespassers, he said, but they may simply go elsewhere in the reserve.

W.W.F. plans to tease out the answers to these questions by evaluating the drones’ effectiveness against poachers here in Liwonde."

Wednesday, March 8, 2017

No One Should Give In to Cyber Extortion Unless It's a Life or Death Situation; Slate, March 7, 2017

Josephine Wolff, Slate; No One Should Give In to Cyber Extortion Unless It's a Life or Death Situation

"Paying ransoms and caving to extortion demands just encourages more of the same activity, directed at both previous victims and new ones. The only way to effectively discourage this kind of crime is to make it so fruitless, so unprofitable, so profoundly ineffective that the perpetrators find a new outlet for their energies. And the only way to do that is to stop relying on individual victims and organizations to make these choices themselves and implement policies that explicitly penalize the payment of online ransoms in most circumstances."

Friday, November 18, 2016

Jonathan Nolan Responds To That Westworld Location Theory; Slashfilm.com, 10/17/16

Peter Sciretta, Slashfilm.com; Jonathan Nolan Responds To That Westworld Location Theory:
Minor spoilers re "Westworld" plot themes
[Kip Currier: Viewers of Season 1 of the popular new HBO series "Westworld"--a reimagined reboot of Michael Crichton's 1973 film of the same name--have increasingly seen the protect-at-all-costs value Delos places on Westworld's intellectual property, as well as the privacy concerns the park raises. Showrunners Jonathan Nolan and Lisa Joy Nolan touch on these issues below:]
"In regards to the computer terminals where the Delos staff communicate to their loved ones back home, [Lisa Joy Nolan] says:
Regardless of where they are, the park is very, very vast, and you don’t rotate home often. You don’t have open communication where you can just pick up a phone. Even senior people have to go to the coms room – because [the park is] protecting their intellectual property. We’re hoping to paint a portrait of the culture of the corporation.
[Jonathan] Nolan (who was a showrunner on Person of Interest, a series about a computer system that could analyze all forms of public and private data to predict the future) seems to be very interested in the aspect of big brother looking in on our communications. As for how it relates to Westworld, he says the Delos corporation wants to protect its intellectual property and the privacy of the park’s guests:
In Westworld, the value of the park is all in its intellectual property, it’s all in the code. So regardless of the park’s location, they would be extremely careful with that code and making sure it’s virtually impossible to smuggle it out of the park. And there’s the privacy of the guests – you’re not going to have a good time in Westworld if somebody is Instagramming your activities. I’m amazed [th]at [sic] Las Vegas has survived the Instagram age. In episode 2, when the guests come in, we don’t see this, but we assume these guys have cell phones that they’re not allowed to bring in the park. We very much think this is a path where culture may be going – that we’ll get [so] over-exposed and sick of the interconnectedness of our lives that we’ll hunger for places [that offer disconnected privacy]. We’ll hunger for a moment where we can go back toward having some privacy."

Thursday, September 8, 2016

Trade Secret Protection Blocks Sick Samsung Workers From Data; Claims Journal, 8/12/16

Youkyung Lee, Claims Journal; Trade Secret Protection Blocks Sick Samsung Workers From Data:
"An Associated Press investigation has found South Korean authorities have, at Samsung’s request, repeatedly withheld from workers and their bereaved families crucial information about chemicals they were exposed to at its computer chip and liquid crystal display factories. Sick workers are supposed to have access to such data through the government or the courts so they can apply for workers’ compensation from the state. Without it, government officials commonly reject their cases.
The justification for withholding the information? In at least six cases involving 10 workers, it was trade secrets. Court documents and interviews with government officials, workers’ lawyers and their families show Samsung often cites the need to protect trade secrets when it asks government officials not to release such data.
“Our fight is often against trade secrets. Any contents that may not work in Samsung’s favor were deleted as trade secrets,” said Lim Ja-woon, a lawyer who has represented 15 sick Samsung workers."

Monday, August 29, 2016

Your privacy doesn’t matter at the U.S. border; Pittsburgh Post-Gazette, 8/29/16

Noah Feldman, Pittsburgh Post-Gazette; Your privacy doesn’t matter at the U.S. border:
"The lesson from all this isn’t just that you approach a border at your own risk. It’s that major exceptions to our basic liberties should be interpreted narrowly, not broadly. Searching a reporter’s phone or anyone’s data isn’t within the government’s plausible set of purposes.
There are two ways to fix the problem. One is for Congress to pass a law that prohibits such border searches, as was proposed unsuccessfully in 2008 and 2009.
If Congress won’t act, though, it’s up to the Supreme Court to repair the damage it did in 1886 and 1977. It doesn’t need to overturn its precedent, just narrow it to cover the circumstances that Congress actually had in mind in 1789, namely border searches for goods being shipped illegally or without duty. That doesn’t include data. It would be a big improvement in constitutional doctrine — and civil liberties."