European Commission Looks At EU-Wide Civil Law Rules On Robotics And Artificial Intelligence

EU told to lead the way on liability rules and ethical standards for Artificial Intelligence. The European Commission delivers its initial response.

In our blog last month, we looked at the request from Members of the European Parliament (MEPs) for the European Commission to lead the way on developing EU-wide liability rules and ethical standards to deal with artificial intelligence.

The EU Commission has now issued a response which identifies the need for legal certainty and the legal challenges it faces in determining where liability should fall amongst the different market players.

EU Commission response on liability 

The EU Commission wants to capitalise on work already undertaken that could be used to allocate liability appropriately. Its attention is focused on the evaluation currently underway of the Directive on Liability for Defective Products (the Directive) and whether it can apply to new technological developments such as advanced robotics.

However, the EU Commission rightly identifies that autonomous robots capable of interacting in unpredictable ways raise important questions as to the suitability of current EU and national rules.

It’s no surprise that the EU Commission wants to focus initially on the producer as the origin of artificial intelligence and on the application of current legislation. However, it seems unlikely that the strict liability approach found in the Directive, which holds producers liable for damage caused by “defective” products, will be particularly useful here, unless every autonomous act of a robot that results in damage is deemed a defect.

Other approaches on the cards are a risk-based liability regime, which allocates liability to the market actors who generate a major risk for others and benefit from the device, and a risk-management approach, which assigns liability to the market actor best placed to minimise that risk. Both would rely on insurance schemes playing an instrumental role; however, no clarity is provided as to what this starring role would look like.

Ethical standards and protection of fundamental rights

The response from the EU Commission provides little insight into what ethical standards should be adopted, but the Commission is confident that a solid framework is already in place through the Better Regulation Package to assess the impact of legislative proposals and policy measures on fundamental rights.

The Commission’s response states that it will be guided by, and rely on, established principles. Most notably, it will rely on the “precautionary principle”: if there is a possibility that a given policy or action might cause harm to the public or the environment, and there is no scientific consensus on the issue, the policy or action in question should not be pursued.

Investment will be required to understand not only the technical aspects of artificial intelligence, but also its socio-economic impact.

New legislation timings

The Commission has confirmed in its response that nothing will happen until the stakeholder consultation exercises on product liability challenges in the context of the Internet of Things and autonomous systems, and the evaluation of Directive 85/374/EEC on Liability for Defective Products, have concluded.

_________________________________________________

The information in this blog post is provided for general informational purposes only and may not reflect the current law in your jurisdiction. No information contained in this post should be construed as legal advice from JAG Shaw Baker or the individual author, nor is it intended to be a substitute for legal counsel on any subject matter.

This post was written by Ashley Williams, Associate at JAG Shaw Baker.