The European Union wants to hold companies responsible for the damage caused by AI – Politico

The European Commission on Wednesday proposed new rules that would see makers of software and AI-powered products forced to compensate people harmed by their creations.

A new AI liability directive would make it easier to sue for compensation when a person or organization is harmed or injured by drones and AI-powered robots, or by software such as automated recruitment algorithms.

“The new rules will give victims of harm caused by artificial intelligence systems an equal opportunity, access to a fair trial and compensation,” Justice Commissioner Didier Reynders told reporters before presenting the proposals.

The bill is the latest attempt by European officials to regulate artificial intelligence and set a global standard for controlling the booming technology. It comes as the European Union is in the throes of negotiations on the AI Act, the world’s first bill to curb high-risk uses of artificial intelligence, including facial recognition, “social scoring” systems and AI-enhanced software for immigration and social benefits.

“If we want to get real confidence from consumers and users in the application of artificial intelligence, we need to make sure that it is possible to have access to compensation and to reach a real decision in justice if necessary, without too many obstacles,” Reynders said.

Under the new law, victims will be able to sue a provider, developer or user of AI technology if they suffer damage to their health or property, or suffer discrimination on the basis of fundamental rights such as privacy. Until now, it has been very difficult and expensive for victims to build cases when they believe they have been harmed by AI, because the technology is complex and opaque.

Courts will have more power to open up AI companies’ black boxes and demand detailed information about the data used for algorithms, technical specifications and risk-control mechanisms.

With this new access to information, victims can seek to demonstrate that the harm came from a technology company selling an AI system, or that the user of the AI – for example, a university, workplace or government agency – failed to comply with obligations under other European laws such as the Artificial Intelligence Act or the proposed directive to protect platform workers. Victims will also have to prove that the damage is linked to the specific AI application.

The European Commission has also proposed an update to its product liability rules. The 1985 law was not adapted to new product categories such as connected devices, and the revised rules are intended to let customers claim compensation when they suffer harm from a defective software update, upgrade or service. The proposed product liability rules would also bring online marketplaces into scope: they could be held liable if they do not disclose the name of a trader to a person who was harmed by a product ordered through them.

The Commission’s proposal will still need to be approved by national governments in the Council of the European Union and by the European Parliament.

Parliament in particular could object to the European Commission’s choice to propose a weaker liability regime than the one Parliament itself put forward earlier.

In 2020, Parliament called on the Commission to adopt rules ensuring that victims of harmful AI receive compensation, and specifically requested that developers, providers and users of autonomous high-risk AI be held legally liable even for unintended harm. But the EU executive decided to take a “pragmatic” approach that was weaker than this strict liability regime, saying the evidence was “not enough to justify” such a regime.

“We chose the lowest level of intervention,” Reynders said. “We need to see if there are new developments [that will] justify stronger grounds in the future.”

He added that the Commission would review whether a stricter regime was needed five years after the rules come into force.

Peter Hayek contributed reporting.
