Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD

Thursday, February 20, 2020
Nicholas Wallace, Science; Europe plans to strictly regulate high-risk AI technology
"The European Commission today unveiled its plan to strictly regulate
artificial intelligence (AI), distinguishing itself from more
freewheeling approaches to the technology in the United States and
China.
The commission will draft new laws—including a ban on “black box” AI
systems that humans can’t interpret—to govern high-risk uses of the
technology, such as in medical devices and self-driving cars. Although
the regulations would be broader and stricter than any previous EU
rules, European Commission President Ursula von der Leyen said at a
press conference today announcing the plan that the goal is to promote “trust, not fear.” The plan also includes measures to update the European Union’s 2018 AI strategy and pump billions into R&D over the next decade.
The proposals are not final: Over the next 12 weeks, experts, lobby
groups, and the public can weigh in on the plan before the work of
drafting concrete laws begins in earnest. Any final regulation will need
to be approved by the European Parliament and national governments,
which is unlikely to happen this year."
Monday, January 20, 2020
A Practical Guide for Building Ethical Tech; Wired, January 20, 2020
Zvika Krieger, Wired;
""Techlash," the rising public animosity toward big tech companies and their impacts on society, will continue to define the state of the tech world in 2020. Government leaders, historically the stewards of protecting society from the impacts of new innovations, are becoming exasperated at the inability of traditional policymaking to keep up with the unprecedented speed and scale of technological change. In that governance vacuum, corporate leaders are recognizing a growing crisis of trust with the public. Rising consumer demands and employee activism require more aggressive self-regulation.
In response, some companies are creating new offices or executive positions, such as a chief ethics officer, focused on ensuring that ethical considerations are integrated across product development and deployment. Over the past year, the World Economic Forum has convened these new “ethics executives” from over 40 technology companies from across the world to discuss shared challenges of implementing such a far-reaching and nebulous mandate. These executives are working through some of the most contentious issues in the public eye, and ways to drive cultural change within organizations that pride themselves on their willingness to “move fast and break things.”"
A Practical Guide for Building Ethical Tech
Companies are hiring "chief ethics officers," hoping to regain public trust. The World Economic Forum's head of technology policy has a few words of advice.
""Techlash," the rising public animosity toward big tech companies and their impacts on society, will continue to define the state of the tech world in 2020. Government leaders, historically the stewards of protecting society from the impacts of new innovations, are becoming exasperated at the inability of traditional policymaking to keep up with the unprecedented speed and scale of technological change. In that governance vacuum, corporate leaders are recognizing a growing crisis of trust with the public. Rising consumer demands and employee activism require more aggressive self-regulation.
In response, some companies are creating new offices or executive positions, such as a chief ethics officer, focused on ensuring that ethical considerations are integrated across product development and deployment. Over the past year, the World Economic Forum has convened these new “ethics executives” from over 40 technology companies from across the world to discuss shared challenges of implementing such a far-reaching and nebulous mandate. These executives are working through some of the most contentious issues in the public eye, and ways to drive cultural change within organizations that pride themselves on their willingness to “move fast and break things.”"
Thursday, January 18, 2018
In new book, Microsoft cautions humanity to develop AI ethics guidelines now; GeekWire, January 17, 2018
Monica Nickelsburg, GeekWire;
In new book, Microsoft cautions humanity to develop AI ethics guidelines now
"This dangerous scenario is one of many posited in “The Future Computed,” a new book published by Microsoft, with a foreword by Brad Smith, Microsoft president and chief legal officer, and Harry Shum, executive vice president of Microsoft’s Artificial Intelligence and Research group.
The book examines the use cases and potential dangers of AI technology, which will soon be integrated into many of the systems people use every day. Microsoft believes AI should be developed with six core principles: “fair, reliable and safe, private and secure, inclusive, transparent, and accountable.”
Nimble policymaking and strong ethical guidelines are essential to ensuring AI doesn’t threaten equity or security, Microsoft says. In other words, we need to start planning now to avoid a scenario like the one facing the imaginary tech company looking for software engineers."