Showing posts with label NIST. Show all posts

Friday, October 25, 2024

Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools; The New York Times, October 24, 2024

The New York Times; Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools

"President Biden on Thursday signed the first national security memorandum detailing how the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting “guardrails” on how such tools are employed in decisions varying from nuclear weapons to granting asylum.

The new document is the latest in a series Mr. Biden has issued grappling with the challenges of using A.I. tools to speed up government operations — whether detecting cyberattacks or predicting extreme weather — while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them...

The new guardrails would also prohibit letting artificial intelligence tools make a decision on granting asylum. And they would forbid tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without a human weighing in.

Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected from spying or theft by foreign adversaries, much as early nuclear weapons were. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to safeguard their inventions."

Friday, August 23, 2024

The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws; Wired, August 21, 2024

 Lily Hay Newman, Wired; The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws

"At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.

The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies."