Showing posts with label AI ethics guidelines. Show all posts

Friday, September 13, 2024

Poynter: When it comes to using AI in journalism, put audience and ethics first; Poynter Institute, September 12, 2024


"Download a PDF of the full report, “Poynter Summit on AI, Ethics & Journalism: Putting audience and ethics first.”

Rapidly advancing generative artificial intelligence technology and journalism have converged during the biggest election year in history. As more newsrooms experiment with AI, the need for ethical guidelines and audience feedback has surfaced as a key challenge.

The Poynter Institute brought together more than 40 newsroom leaders, technologists, editors and journalists during its Summit on AI, Ethics & Journalism to tackle both topics. For two days in June 2024, representatives from the Associated Press, the Washington Post, Gannett, the Invisible Institute, Hearst, McClatchy, Axios and Adams, along with OpenAI, the Online News Association, the American Press Institute, Northwestern University and others, debated the use of generative AI and its place within the evolving ethics of journalism.

The goals: Update Poynter’s AI ethics guide for newsrooms with insight from journalists, editors, product managers and technologists actually using the tools. And outline principles for ethical AI product development that can be used by a publisher or newsroom to put readers first.

Data from focus groups convened through a Poynter and University of Minnesota partnership informed the discussions, while a hackathon challenged attendees to devise AI tools grounded in audience trust and journalistic ethics."

Thursday, November 18, 2021

The Department of Defense is issuing AI ethics guidelines for tech contractors; MIT Technology Review, November 16, 2021


Will Douglas Heaven, MIT Technology Review; The Department of Defense is issuing AI ethics guidelines for tech contractors

"In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.

The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running.

“There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail,” says Bryce Goodman at the Defense Innovation Unit, who coauthored the guidelines."