In the age of ChatGPT and generative artificial intelligence, it is still too rare for those concerned with technological challenges to engage both the public sphere and legislative bodies. Such an effort was recently on display when more than 75 public figures signed an open letter arguing that there is an urgent need to pass the Artificial Intelligence and Data Act (AIDA).1

This law, still at the draft stage as part of Bill C-27, is one element of an effort to modernize Canadian privacy law. Prompted by the letter's 75 signatories – artificial intelligence researchers, academics and tech company CEOs, among others – we want to call for a more accountable and democratic development of AIDA.

The letter asks “our political representatives to support AIDA with conviction. While the parliamentary committee will allow for improvements and amendments, we believe the current proposal is on the right track.” For our part, we are concerned that the law as drafted, and the push to rush its passage, limit debate by emphasizing innovation while sidestepping more meaningful discussion of rights, harms and the broader socio-political implications of artificial intelligence.

In fact, little in the current bill compels “high-impact systems” to respect human rights. This is all the more worrisome given that detailed study of the bill falls to the Standing Committee on Industry and Technology rather than the Standing Committee on Access to Information, Privacy and Ethics. This assignment risks aggravating an imbalance that favors an approach centered on economic development. AIDA is deficient in this regard, and stronger safeguards are needed to better protect citizens.

For too long it has been assumed that artificial intelligence (AI) serves the public good, despite plenty of evidence to the contrary. How, and above all by whom, can what is responsible and “socially beneficial” be defined today? The problem here is precisely the one identified in a report by the Centre interuniversitaire de recherche sur la science et la technologie (CIRST)2: the network of Canadian AI players is “tightly woven” and acts as both judge and party. In the context of Bill C-27 and the calls to rush its passage, this means these actors are at once the promoters and the beneficiaries of a permissive law that, for the moment, offers only an empty shell without significant obligations.

As researchers at Toronto Metropolitan University, the CDR and CIGI have pointed out, the bill does not, among other things, apply to the use of AI by federal government institutions, leaving the door open to abuses such as the Royal Canadian Mounted Police’s illegal use of Clearview AI facial recognition technology.

As it stands, a number of crucial elements will be drafted only after the bill is adopted, with less transparency as the result – hardly a surprise, given that AIDA runs to about fifteen pages whereas its European equivalent exceeds one hundred. Another major failing is that the commissioner responsible for applying the law lacks the necessary independence, being placed under the direction of the ministry responsible for economic development.

Equally important is the consultation deficit, a problem that the urgency of a moral panic seems set to deepen rather than resolve. In their letter, the authors cite a survey sponsored by the Canadian AI Advisory Council, a body with which they are closely associated. Polls are surely the crudest form of consultation: they invite neither citizen participation nor deliberation.

The bill, meanwhile, was crafted in the absence of meaningful public consultation, preventing civil society groups, researchers and historically marginalized communities from contributing to it in any substantive way. Having missed the opportunity to consult the population at the appropriate time, the government is now asking the public to absorb that error by approving an incomplete bill focused on self-regulation.

The artificial nature of the consensus being built is worrying, and the state should not be satisfied with it. Other, undoubtedly more critical voices exist and deserve to be heard. While it is essential to legislate on the risks posed by AI, it must be done in a reflective, genuinely inclusive way, and in a manner that preserves the rights of citizens in the face of its deployment.