Nicholas Wallace, Science: Europe plans to strictly regulate high-risk AI technology
"The European Commission today unveiled its plan to strictly regulate
artificial intelligence (AI), distinguishing itself from more
freewheeling approaches to the technology in the United States and
China.
The commission will draft new laws—including a ban on “black box” AI
systems that humans can’t interpret—to govern high-risk uses of the
technology, such as in medical devices and self-driving cars. Although
the regulations would be broader and stricter than any previous EU
rules, European Commission President Ursula von der Leyen said at a
press conference today announcing the plan that the goal is to promote “trust, not fear.” The plan also includes measures to update the European Union’s 2018 AI strategy and pump billions into R&D over the next decade.
The proposals are not final: Over the next 12 weeks, experts, lobby
groups, and the public can weigh in on the plan before the work of
drafting concrete laws begins in earnest. Any final regulation will need
to be approved by the European Parliament and national governments,
which is unlikely to happen this year."
Thursday, February 20, 2020