- A father named Mark took pictures of his son's groin after noticing that his penis was swollen and painful, so he could track the progress of the problem.
- Mark's wife then made an appointment for an urgent consultation. The nurse asked the parents to send photos so the doctor could see them before the video consultation, a constraint imposed by the Covid-19 pandemic.
- The photos helped the doctor diagnose the problem and prescribe antibiotics for Mark's son.
This rather mundane story could have ended there. But it did not:
- Two days later, Mark received a notification on his smartphone: his Google account had been deactivated because of "serious violations of the rules of use".
- Mark quickly understood the problem: Google had flagged his son's photos as child pornography. He recognized this because he had himself worked as an engineer on tools that automatically detect problematic content.
In principle, a human reviewer is supposed to check the reports manually. Mark therefore assumed that contacting the right person would be enough to clear up the misunderstanding. That turned out to be a serious mistake.
A domino effect on his entire digital life. The father filled out the Google form that allows a user to contest the blocking of an account. At the same time, he discovered the full extent of the sanction:
- His emails, the contact details of his friends and former colleagues, and the documentation of his son's early years were lost.
- Worse, his Google Fi account – a virtual mobile operator that does not operate in Europe – was also blocked. Mark had no choice but to change his phone number and carrier.
- Without access to his old number and his old email address, he could no longer log in to accounts that have nothing to do with Google, because the two-factor authentication he had enabled requires codes sent by SMS or email.
- Google told Mark that it would not unblock his account, without giving him any further explanation.
The father cleared, but Google turns a deaf ear. Meanwhile, the San Francisco Police Department opened an investigation into him, starting a week after Mark took the photos. The investigator in charge of the case had access to everything Google held on the man: emails, photos, videos, browsing history and location data. The result: the police determined that there was no reason to pursue the investigation.
Despite this decision, which cleared him of all suspicion concerning the photos in question, Google has still not agreed to unblock Mark's account. In the end, the father lost an entire chunk of his digital life even though he did nothing wrong.
The limits of machine learning. This story raises important questions. In Europe, the authorities are planning to strengthen the fight against online child sexual abuse by requiring service providers and platform operators to do everything possible to detect child sexual abuse material. Given the sheer volume of images and videos published on the web every day, such monitoring cannot be carried out by humans alone. The most efficient approach remains automation.
This is where machine learning, often loosely called artificial intelligence, comes in. Algorithms are trained to spot content that violates certain rules: the model is shown millions of photos and videos so that it learns to distinguish acceptable content from content that is not.
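To make that idea concrete, here is a minimal sketch of how such a binary image classifier is trained, assuming a generic supervised-learning setup in PyTorch; the data, labels and model below are placeholders for illustration, not Google's actual system.

```python
# Minimal sketch of training a binary "acceptable vs. rule-violating" image
# classifier. All data and labels here are random placeholders (assumption),
# standing in for the millions of human-labelled examples a real system uses.
import torch
import torch.nn as nn

# Hypothetical stand-in dataset: 64x64 RGB images labelled 0 (acceptable)
# or 1 (violates the rules).
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 2, (256,)).float()

# Small convolutional network producing a single "violation" score per image.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    scores = model(images).squeeze(1)   # one logit per image
    loss = loss_fn(scores, labels)      # compare with human-provided labels
    loss.backward()
    optimizer.step()

# At scan time, any image whose score crosses a threshold is flagged for
# review -- with no notion of the context in which the photo was taken.
with torch.no_grad():
    flagged = torch.sigmoid(model(images).squeeze(1)) > 0.5
```

The key point the sketch illustrates is that the model only ever sees pixels and labels: nothing in this pipeline encodes why a photo was taken.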
In this case, Google's program was simply doing what it was trained to do: detect potentially illegal content. The New York Times journalist who reported the story saw the photos of Mark's son, and notes:
"These are explicit photos of a child's genitalia. But context matters: they are photos taken by parents worried about their child's health."
The importance of context. Machine learning, however, is not (yet?) able to take context into account; that kind of judgment still relies on human review. And Mark is not the only parent to have faced this situation: the New York Times cites the case of another father in the same position.
Experts interviewed by the American newspaper believe it is impossible to determine precisely how many people are affected by such "false positives", but some speak of thousands of cases, perhaps more. The problem, they note, is that the victims of these mistakes tend not to publicize them, given the seriousness of the accusations involved.
While the experts all agree that child sexual abuse must be fought, they point to the potential excesses of automation. By scanning users' photo albums, the algorithms are bound to come across private photos that have nothing to do with sexual abuse. Unable to tell the difference, they will flag them anyway.
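A back-of-the-envelope calculation shows why scale makes such mistakes unavoidable. The figures below are purely hypothetical assumptions chosen for illustration; they do not come from the article or from any platform's published numbers.

```python
# Purely hypothetical figures, assumed for illustration only.
photos_scanned_per_day = 1_000_000_000   # assumed volume of ordinary, benign photos scanned daily
false_positive_rate = 0.0001             # assumed 0.01% chance of wrongly flagging a benign photo

wrongly_flagged_per_day = photos_scanned_per_day * false_positive_rate
print(wrongly_flagged_per_day)  # 100000.0 benign photos flagged per day under these assumptions
```

Even with an error rate that sounds tiny, the sheer volume of scanned content would, under these assumptions, produce a steady stream of wrongly flagged private photos.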
In the United States, flagged content is sent to the National Center for Missing and Exploited Children (NCMEC). The non-profit organization receives more than 29 million reports a year, roughly 80,000 a day, a figure that can only grow as platforms are required to demonstrate that they are doing everything possible to detect this content.