BRUSSELS, Sept 28 (Reuters) – The European Commission on Wednesday proposed rules making it easier for individuals and companies to sue makers of drones, robots and other products equipped with artificial intelligence software for compensation for harm caused by them.
The AI Liability Directive aims to address the increasing use of AI-enabled products and services and the patchwork of national rules across the 27-country European Union.
Under the draft rules, victims can seek compensation for harm to their life, property, health and privacy due to the fault or omission of a provider, developer or user of AI technology, or for discrimination in a recruitment process using AI.
The EU publication is available here: New liability rules on products and AI to protect consumers and foster innovation
“We want the same level of protection for victims of damage caused by AI as for victims of old technologies,” Justice Commissioner Didier Reynders told a news conference.
The rules lighten the burden of proof on victims through a “presumption of causality”: victims need only show that a manufacturer or user failed to comply with certain requirements and that this failure is plausibly linked to the harm caused by the AI system; the causal link is then presumed.
Under a “right of access to evidence”, victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so that they can identify the liable person and the fault that caused the damage.
The Commission also announced an update to the Product Liability Directive that means manufacturers will be liable for all unsafe products, tangible and intangible, including software and digital services, and also after the products are sold.
Users can sue for compensation when software updates render their smart-home products unsafe or when manufacturers fail to fix cybersecurity gaps. Those with unsafe non-EU products will be able to sue the manufacturer’s EU representative for compensation.
The AI Liability Directive will need to be agreed with EU countries and EU lawmakers before it can become law.
This is quite interesting, especially from the perspective of people who think that AIs should be granted more far-reaching rights, e.g. the possibility of owning their own copyrights.