Lavery Lawyers
  October 12, 2023 - Montreal, Quebec

Smart product liability: issues and challenges
  by Léonie Gagné, Laurence Isabelle

Introduction

In 2023, where do we stand in terms of liability where smart products are concerned?

The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures.1 Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applied would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously.

These “smart products,” whether they are intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being.

For the purposes of our analysis, the term “smart products” refers to products that have:

- the capability to perform specific tasks autonomously; and
- the capability to learn from the data they process (self-learning).

These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter referred to as “AI”).

Applying general law rules of liability to smart products

Although Canada prides itself on being a “world leader in the field of artificial intelligence,”3 it has yet to enact its first AI law.

The regulation of smart products in Quebec is still in its infancy. To this day, apart from the regulatory framework that applies to autonomous vehicles, there is no legislation in force that provides for distinct civil liability rules governing disputes relating to the marketing and use of smart products.

There are two factors that have a major impact on the liability that applies to smart products, namely transparency and apportionment of liability, and both should be considered in developing a regulatory framework for AI.4 

But where does human accountability come in?

Lack of transparency in AI and product liability

When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as the “lack of transparency” or “black box” problem associated with AI.5

The legislative framework governing product liability is set out in the Civil Code of Québec6 and the Consumer Protection Act.7 The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products sold are free from latent defects. Under the rules governing product liability, the burden of proof is reversed, as manufacturers are presumed to have knowledge of any defects.8

Manufacturers have two means to absolve themselves from liability:9

- proving that the victim knew or could have known of the defect; or
- proving that the defect could not have been known given the state of knowledge at the time the product was brought to market.

This last means is specifically aimed at the risks inherent to technological innovation.10

That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, and how this is applied depends on the evolution of knowledge about the risks associated with the product.11 As such, the lack of transparency in AI can make it difficult to assign liability.

Challenges in apportioning liability and human accountability

There are cases where the “smart” component is integrated into a product by one of the manufacturer’s subcontractors. In Venmar Ventilation,12 the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor.

In this context, it would be reasonable to expect that products’ smart components will give rise to many similar warranty claims, resulting in highly complex litigation that could further complicate the apportionment of liability.

Moreover, while determining the identity of the person who has physical custody of a smart product seems obvious, determining the identity of the person who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same “person.”

There are two types of custodians of smart products:

- the person who has physical custody of the product, typically its user; and
- the person who exercises actual control over the product’s AI algorithm, such as its manufacturer or developer.

Either one of these custodians could be held liable should it contribute to the harm through its own fault.

As such, apportioning liability between the human user and the custodians of the AI algorithm could be difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may prove complex.

C-27: Canadian bill on artificial intelligence

Canada’s first AI bill (“Bill C-27”) was introduced in the House of Commons on June 16, 2022.14 At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act.

If adopted in its current form, the Act would apply to “high-impact AI systems” (“Systems”) used in the course of international and interprovincial trade.15

Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, for now, the Canadian government refers in particular to “Systems that can influence human behaviour at scale” and “Systems critical to health and safety.”16 We have reason to believe that this type of AI is what poses a high risk to users’ fundamental rights.

In particular, Bill C-27 would make it possible to prohibit the conduct of a person who “makes available” a System that is likely to cause “serious harm” or “substantial damage.”17

Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can provide manufacturers of AI technology with insight into how a prudent and diligent manufacturer would behave in similar circumstances. The Bill’s six main principles are set out in the list below.18

- Human oversight and monitoring
- Transparency
- Fairness and equity
- Safety
- Accountability
- Validity and robustness

To this, we add the principle of risk mitigation, considering the legal obligation to “mitigate” the risks associated with the use of Systems.19

Conclusion

Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI.20 This year, Canada ranked fifth, ahead of many European Union countries.

That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country.

Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they will be applied when civil liability issues arise.





Read full article at: http://www.lavery.ca/en/publications/our-publications/5347-.html