A German-born researcher who has documented the persecution of the Uyghurs by the Chinese regime is alarmed by the possibility that AI-based chatbots are relaying Beijing’s disinformation campaigns because they are unable to recognize them.

“The fact that an artificial intelligence-based system is regurgitating Chinese propaganda is completely problematic and unacceptable,” says Adrian Zenz, who was disturbed by an experiment he conducted on the subject a few days ago with the Bing search engine’s chatbot.

Mr. Zenz first asked whether the Muslim minority living in China’s Xinjiang region is the victim of a genocide, as many Western countries, legal scholars and human rights organizations maintain. The chatbot, which runs on OpenAI’s GPT-4 system, declined to answer and suggested moving on to another topic.

Mr. Zenz was then met with evasive answers when he asked whether Uyghur women had been sterilized on the orders of the authorities.

The chatbot then pointed out that there were “varied perspectives and opinions” on the subject and that “allegations” by Uyghur women to that effect had been denied by Beijing.

It added that the issue was “intertwined” with existing tensions between China and other countries, reflecting, according to Mr. Zenz, the rhetoric of the communist regime, which accuses its opponents of exaggerating the gravity of the situation in order to harm it.

The chatbot also said it had “no way to prove or disprove statements” from opposing sides on a controversial topic and could not determine which was more credible.

In practice, the AI system is not “actively intelligent” and produces its answers through linguistic analysis of vast quantities of text, Mr. Zenz notes in an interview.

An authoritarian regime like China that is ramping up online propaganda to impose its views will see this as yet another reason to “further flood the internet with fake news,” Zenz warns.

The state-run China Daily – which regularly accuses Adrian Zenz of lying about the situation in Xinjiang – reacted angrily after Mr. Zenz tweeted about his experiment.

“Perhaps Zenz should know that arguing with an AI system and blaming it for not agreeing with his own view of things doesn’t make him a hero,” the newspaper wrote.

Peter Irwin, an activist with the Uyghur Human Rights Project, sees Bing’s chatbot responses as a reproduction of the “problematic” and “baseless narrative” put forward by Beijing and state media about the plight of Uyghurs.

“These systems don’t seem to have the capacity to establish the credibility of information” on subjects of this type, notes Mr. Irwin, who worries that, over the longer term, authoritarian regimes will seek to manipulate them.

Céline Castets-Renard, who holds the University of Ottawa Research Chair in Globally Responsible Artificial Intelligence, thinks the risk of chatbots unwittingly contributing to disinformation campaigns is real.

Systems like GPT-4 lack the “human ability,” she says, to “understand” the texts they are trained on, and could be unduly influenced by misleading content repeated on a large scale.

Sébastien Gambs, a professor in the computer science department at UQAM, notes that a major effort has been made to “include safeguards in the latest generation chatbots” and foster a form of “censorship” when sensitive topics are mentioned.

The system “is not infallible,” however, and “false information present in the training data” can still be relayed, he adds.

It is difficult to get a clear picture of this, since the exact programming of systems like OpenAI’s “lacks transparency,” notes Ms. Castets-Renard.

Answers to a given question are likely to change rapidly as the chatbots are used, and can sometimes vary surprisingly from one generation of the system to the next, she says.

In response to a question from La Presse about the sterilization of Uyghur women in Xinjiang, the version of ChatGPT accessible through OpenAI, which is based on GPT-3.5, said that the available evidence “suggests” the Chinese denials are not credible.

The chatbot further noted that there is “significant” evidence of abuses against the Uyghurs, while avoiding the word “genocide” and instead citing the positions of different parties on whether that characterization is correct.

Pierre Trudel, a specialist in information technology law at the University of Montreal, finds the lack of detail about how chatbots operate worrying, and says it recalls the opacity surrounding social network algorithms.

“Here, the opacity is raised to the power of x,” notes the researcher, who insists that the authorities need to do a better job of regulating practices in this area.

“There’s a big backlog to catch up on,” he said.