How would you feel if your innocent pictures were falsely flagged as child pornography? It would be horrifying. Unfortunately, there are situations where that can easily happen.
Some people, sometimes even prosecutors, don’t understand what child pornography is and isn’t. What is it? Images that intentionally sexualize children or that depict children engaging in sexual acts. What is it not? Innocent images of children who happen to be naked.
Recently, an artificially intelligent algorithm, or AI, flagged two photos as child pornography even though they were not. The AI, owned by Google, is permitted to flag images it perceives as possible child pornography. Once it does, a human reviewer is supposed to verify that the image actually is child pornography before any action is taken. But in these two cases, the reviewer failed to recognize the images as completely innocent and forwarded the cases to the authorities.
What were those images? In both cases, they were photos of children’s genital infections, taken at the request of medical professionals. They had been emailed privately; they had never been made public.
In addition to forwarding these medical images to the police, Google investigated and later punished the fathers who had sent the photos to their children’s doctors. For example, it pored over one man’s entire photo collection and then destroyed it.
What did the police do?
In both cases, the police departments immediately cleared the fathers of any criminal acts. Trained police officers recognized that the medical photos were not child pornography.
Nevertheless, the men went through a traumatic accusation. They lost their private photos and videos, and their email accounts, and in one case telephone service, were cut off. Even though they have been cleared, Google has refused to restore their accounts or their photo collections.
Are these isolated cases?
Probably not. Google’s AI remains deeply entrenched in users’ lives, and these two cases occurred just one day apart in February. It seems likely there will be more.
If you’re a Google user, you can bet Google’s AI has evaluated your photos, too.
Unfortunately, it’s not just Google. A review of 150 Facebook accounts flagged for child pornography found that in 75% of cases the flagged images were not actually child pornography. And of 75 LinkedIn accounts reported to EU authorities for child pornography, only 31 contained any.
This is alarming. In countries that don’t outlaw such scanning, Google’s AI could be used to uncover all kinds of private information about people: whether you’re LGBTQ+, a government dissident, or simply a Google critic.