With his grave face illuminated by perfect light despite the darkness, in a dramatic pose reminiscent of Jesus on the cross, Donald Trump is escorted by police officers. The image is striking… but it is fake. “Editing tricks aren’t that new,” observes Martin Gibert, a researcher in the ethics of artificial intelligence at the University of Montreal. In the pre-computer era, you had to cut up images and stitch them together, and the result was usually quite rough. Image editing software like Photoshop helped refine such work. Now, all you have to do is type a description of the image you want, and a generator trained on billions of images taken from the Internet produces a fairly realistic montage. These new artificial intelligence tools, such as MidJourney, Stable Diffusion and DALL-E, currently fascinate creators as much as they worry ethicists.
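To make concrete how little effort this now takes, here is a minimal sketch of text-to-image generation using the open-source Stable Diffusion model through Hugging Face’s diffusers library. The checkpoint and the prompt are illustrative only; the tool and prompt actually used to produce the Trump image are not known.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion
# via Hugging Face's diffusers library. Running it requires a GPU and
# downloading several gigabytes of model weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A single sentence of text is enough to produce a photorealistic image.
prompt = "press photo of a man in a suit escorted by police officers"
image = pipe(prompt).images[0]
image.save("generated.png")
```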

However sophisticated they may be, these artificially intelligent generators are not yet fully developed. Study their output closely and you detect their weak point: hands. These are often deformed or have an abnormal number of fingers, as a closer look at the photo suggests (note in particular the hands of the police officer in the background). “It seems difficult for these tools to create correct hands,” says Martin Gibert. But it is only a matter of time before these programs fix such flaws. “In a way, it’s good that this image exists and that it’s badly done, because it allows us to see how the image algorithm works.”

The trained eye of Martin Benoit, a photography teacher at the Cégep du Vieux Montréal, focuses on another element of the image. “The join between Trump’s head and neck is questionable, which initially led me to believe it was a really bad photomontage, where someone took a real photo of an arrest, simply replaced the head, created a tie and adjusted the colors and brightness,” he says. The former president’s flawless hairstyle is also suspect. “Images created by artificial intelligence all have more or less the same difficulty generating hair well; it’s a complex structure to reproduce realistically,” he adds.

There are tools that create images, and other tools that dissect them. Using FotoForensics, which can detect manipulation in an image, Martin Benoit noticed different compression rates on certain elements of the image, in particular the tie, a sign that several files were combined. “This detection software is not infallible,” he notes, but it does provide clues. Another suspicious element: the low resolution of the circulated image, which, among other things, makes the police officers’ badges unreadable. “It’s very convenient to provide a poor-quality image, because it hides the AI’s ‘errors’,” says Martin Benoit.
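One widely used technique of this kind is error level analysis, which compares how different regions of a JPEG respond to being recompressed. The following is a simplified sketch of the general idea in Python with the Pillow imaging library; it illustrates the principle, not FotoForensics’s actual implementation, and the filename is a placeholder.

```python
# A simplified sketch of error level analysis (ELA): re-save a JPEG at
# a known quality and look at where the image changes. This is an
# illustration of the general idea, not FotoForensics's actual code.
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG at a known quality and return the
    amplified pixel-wise difference with the original."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Amplify the (usually faint) differences so they become visible.
    return Image.eval(diff, lambda value: min(255, value * 20))

# Elements pasted in from a file with a different compression history
# (the tie, in Martin Benoit's analysis) tend to show an error level
# that differs from their surroundings.
error_level_analysis("suspect_photo.jpg").save("ela_map.png")  # placeholder filename
```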

All observers agree that as the tools improve, it will become increasingly difficult to distinguish the true from the false. “It reminds us of the importance of the work of journalists,” says Martin Gibert, of UdeM. The source of the image is crucial. Is this a photo released by a credible media outlet, or by another recognized authority? This photomontage, for example, was shared by the Twitter account @TheInfiniteDude, run by a member of a group that explores the potential of artificial intelligence. For Martin Gibert, one must ask who benefits from the circulation of a faked image. “Because it is particularly useful for platforms that derive advertising revenue from clicks…”