Showing posts with label AI ethics guidelines. Show all posts

Monday, May 19, 2025

Artificial Intelligence Resources Compiled for Legal Community; Court News Ohio, May 13, 2025

Staff Report, Court News Ohio; Artificial Intelligence Resources Compiled for Legal Community

"Artificial intelligence and generative artificial intelligence (AI, collectively) are rapidly evolving technologies that impact many, if not most, facets of human life. AI’s potential impact on judicial systems is no exception – from how judges and magistrates write opinions, to the briefs and motions prepared by attorneys, to the evidence provided by plaintiffs and defendants.

To assist the legal community, an array of resources is now available on the Supreme Court of Ohio website about AI and its use in the courts and legal profession.

The new “Artificial Intelligence Resource Library” offers:

  • AI ethics guidelines for judicial officers and attorneys.
  • AI practices in state courts.
  • Legal association reports and statements.
  • Journal and scholarly articles.
  • Useful courses on the topic.

The library content is organized for three groups: courts; attorneys; and the public (particularly nonlawyers who represent themselves in court)."

Wednesday, February 5, 2025

Google lifts its ban on using AI for weapons; BBC, February 5, 2025

Lucy Hooker & Chris Vallance, BBC; Google lifts its ban on using AI for weapons

"Google's parent company has ditched a longstanding principle and lifted a ban on artificial intelligence (AI) being used for developing weapons and surveillance tools.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".

In a blog post Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".

Experts say AI could be widely deployed on the battlefield - though there are fears about its use too, particularly with regard to autonomous weapons systems."

Friday, September 13, 2024

Poynter: When it comes to using AI in journalism, put audience and ethics first; Poynter Institute, September 12, 2024

Poynter Institute; Poynter: When it comes to using AI in journalism, put audience and ethics first

"Download a PDF of the full report, “Poynter Summit on AI, Ethics & Journalism: Putting audience and ethics first.”

Rapidly advancing generative artificial intelligence technology and journalism have converged during the biggest election year in history. As more newsrooms experiment with AI, the need for ethical guidelines and audience feedback has surfaced as a key challenge.

The Poynter Institute brought together more than 40 newsroom leaders, technologists, editors and journalists during its Summit on AI, Ethics & Journalism to tackle both topics. For two days in June 2024, representatives from the Associated Press, the Washington Post, Gannett, the Invisible Institute, Hearst, McClatchy, Axios and Adams, along with OpenAI, the Online News Association, the American Press Institute, Northwestern University and others, debated the use of generative AI and its place within the evolving ethics of journalism.

The goals: Update Poynter’s AI ethics guide for newsrooms with insight from journalists, editors, product managers and technologists actually using the tools. And outline principles for ethical AI product development that can be used by a publisher or newsroom to put readers first.

Data from focus groups convened through a Poynter and University of Minnesota partnership underscored discussion, while a hackathon tested attendees to devise AI tools based on audience trust and journalistic ethics."

Thursday, November 18, 2021

The Department of Defense is issuing AI ethics guidelines for tech contractors; MIT Technology Review, November 16, 2021

Will Douglas Heaven, MIT Technology Review; The Department of Defense is issuing AI ethics guidelines for tech contractors

"In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.

The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running.

“There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail,” says Bryce Goodman at the Defense Innovation Unit, who coauthored the guidelines."