
Wednesday, February 7, 2024

EU countries strike deal on landmark AI rulebook; Politico, February 2, 2024

GIAN VOLPICELLI, Politico; EU countries strike deal on landmark AI rulebook

"European Union member countries on Friday unanimously reached a deal on the bloc’s Artificial Intelligence Act, overcoming last-minute fears that the rulebook would stifle European innovation.

EU deputy ambassadors green-lighted the final compromise text, hashed out following lengthy negotiations between representatives of the Council, members of the European Parliament and European Commission officials...

Over the past few weeks, the bloc’s top economies Germany and France, alongside Austria, hinted that they might oppose the text in Friday’s vote...

Eventually, the matter was resolved through the EU’s familiar blend of PR offensive and diplomatic maneuvering. The Commission ramped up the pressure by announcing a splashy package of pro-innovation measures targeting the AI sector, and in one fell swoop created the EU’s Artificial Intelligence Office — a body tasked with enforcing the AI Act...

A spokesperson for German Digital Minister Volker Wissing, the foremost AI Act skeptic within Germany’s coalition government, told POLITICO: "We asked the EU Commission to clarify that the AI Act does not apply to the use of AI in medical devices."

A statement from the European Commission, circulated among EU diplomats ahead of the vote and seen by POLITICO, reveals plans to set up an “expert group” comprising EU member countries’ authorities. The group’s function will be to “advise and assist” the Commission in applying and implementing the AI Act...

The AI Act still needs the formal approval of the European Parliament. The text is slated to get rubber-stamped at the committee level in two weeks, with a plenary vote expected in April."

Sunday, November 20, 2022

The everyday ethics of AI


"The AI Act is a proposed European law on artificial intelligence. Though it has not yet taken effect, it’s the first such law on AI to be proposed by a major regulator anywhere, and it’s being studied in detail around the world because so many tech companies do extensive business in the EU.

The law assigns applications of AI to four risk categories, Powell said. First, there’s “minimal risk” – benign applications that don’t hurt people. Think AI-enabled video games or spam filters, for example, and understand that the EU proposal allows unlimited use of those applications.

Then there are “limited risk” systems such as chatbots, in which – the AI Act declares — the user must be made aware that they’re interacting with a machine. That would satisfy the EU’s goal that users decide for themselves whether to continue the interaction or step back.

“High risk” systems can cause real harm – and not only physical harm, as can happen in self-driving cars. These systems also can hurt employment prospects (by sorting resumes, for example, or by tracking productivity on a warehouse floor). They can deny credit or loans or the ability to cross an international border. And they can influence criminal-justice outcomes through AI-enhanced investigation and sentencing programs.

According to the EU, “any producer of this type of technology will have to give not just justifications for the technology and its potential harms, but also business justifications as to why the world needs this type of technology,” Powell said.

“This is the first time in history, as far as I know, that companies are held accountable to their products to this extent of having to explain the business logic of their code.”

Then there is the fourth level: “unacceptable risk.” And under the AI Act, all systems that pose a clear threat to the safety, livelihoods and rights of people will be banned, plain and simple."